00:00:00.001 Started by upstream project "autotest-per-patch" build number 132691
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.137 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.137 The recommended git tool is: git
00:00:00.138 using credential 00000000-0000-0000-0000-000000000002
00:00:00.140 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.181 Fetching changes from the remote Git repository
00:00:00.185 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.216 Using shallow fetch with depth 1
00:00:00.216 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.216 > git --version # timeout=10
00:00:00.244 > git --version # 'git version 2.39.2'
00:00:00.244 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.264 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.264 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.814 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.825 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.838 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.838 > git config core.sparsecheckout # timeout=10
00:00:05.849 > git read-tree -mu HEAD # timeout=10
00:00:05.864 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.889 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.889 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.999 [Pipeline] Start of Pipeline
00:00:06.013 [Pipeline] library
00:00:06.014 Loading library shm_lib@master
00:00:06.014 Library shm_lib@master is cached. Copying from home.
00:00:06.029 [Pipeline] node
00:00:06.042 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_3
00:00:06.043 [Pipeline] {
00:00:06.052 [Pipeline] catchError
00:00:06.053 [Pipeline] {
00:00:06.064 [Pipeline] wrap
00:00:06.071 [Pipeline] {
00:00:06.076 [Pipeline] stage
00:00:06.077 [Pipeline] { (Prologue)
00:00:06.092 [Pipeline] echo
00:00:06.093 Node: VM-host-SM38
00:00:06.097 [Pipeline] cleanWs
00:00:06.106 [WS-CLEANUP] Deleting project workspace...
00:00:06.106 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.110 [WS-CLEANUP] done
00:00:06.358 [Pipeline] setCustomBuildProperty
00:00:06.427 [Pipeline] httpRequest
00:00:06.985 [Pipeline] echo
00:00:06.987 Sorcerer 10.211.164.20 is alive
00:00:06.997 [Pipeline] retry
00:00:06.999 [Pipeline] {
00:00:07.015 [Pipeline] httpRequest
00:00:07.019 HttpMethod: GET
00:00:07.020 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.020 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.035 Response Code: HTTP/1.1 200 OK
00:00:07.035 Success: Status code 200 is in the accepted range: 200,404
00:00:07.036 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.771 [Pipeline] }
00:00:13.790 [Pipeline] // retry
00:00:13.798 [Pipeline] sh
00:00:14.078 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.096 [Pipeline] httpRequest
00:00:14.835 [Pipeline] echo
00:00:14.838 Sorcerer 10.211.164.20 is alive
00:00:14.847 [Pipeline] retry
00:00:14.849 [Pipeline] {
00:00:14.863 [Pipeline] httpRequest
00:00:14.868 HttpMethod: GET
00:00:14.869 URL: http://10.211.164.20/packages/spdk_85bc1e85ab0983b7f3814bad50dcfad2df551836.tar.gz
00:00:14.869 Sending request to url: http://10.211.164.20/packages/spdk_85bc1e85ab0983b7f3814bad50dcfad2df551836.tar.gz
00:00:14.877 Response Code: HTTP/1.1 200 OK
00:00:14.878 Success: Status code 200 is in the accepted range: 200,404
00:00:14.878 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/spdk_85bc1e85ab0983b7f3814bad50dcfad2df551836.tar.gz
00:01:49.709 [Pipeline] }
00:01:49.726 [Pipeline] // retry
00:01:49.734 [Pipeline] sh
00:01:50.014 + tar --no-same-owner -xf spdk_85bc1e85ab0983b7f3814bad50dcfad2df551836.tar.gz
00:01:52.557 [Pipeline] sh
00:01:52.839 + git -C spdk log --oneline -n5
00:01:52.839 85bc1e85a lib/reduce: Support storing metadata on backing dev. (4 of 5, data unmap with async metadata)
00:01:52.839 bb633fc85 lib/reduce: Support storing metadata on backing dev. (3 of 5, reload process)
00:01:52.839 4985835f7 lib/reduce: Support storing metadata on backing dev. (2 of 5, data r/w with async metadata)
00:01:52.839 b4d3c8f7d lib/reduce: Support storing metadata on backing dev. (1 of 5, struct define and init process)
00:01:52.839 3031b0f5f lib/reduce: Delete logic of persisting old chunk map
00:01:52.859 [Pipeline] writeFile
00:01:52.876 [Pipeline] sh
00:01:53.159 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:53.172 [Pipeline] sh
00:01:53.457 + cat autorun-spdk.conf
00:01:53.457 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:53.457 SPDK_TEST_NVME=1
00:01:53.457 SPDK_TEST_FTL=1
00:01:53.457 SPDK_TEST_ISAL=1
00:01:53.457 SPDK_RUN_ASAN=1
00:01:53.457 SPDK_RUN_UBSAN=1
00:01:53.457 SPDK_TEST_XNVME=1
00:01:53.457 SPDK_TEST_NVME_FDP=1
00:01:53.457 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:53.466 RUN_NIGHTLY=0
00:01:53.468 [Pipeline] }
00:01:53.482 [Pipeline] // stage
00:01:53.499 [Pipeline] stage
00:01:53.501 [Pipeline] { (Run VM)
00:01:53.515 [Pipeline] sh
00:01:53.797 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:53.797 + echo 'Start stage prepare_nvme.sh'
00:01:53.797 Start stage prepare_nvme.sh
00:01:53.797 + [[ -n 8 ]]
00:01:53.797 + disk_prefix=ex8
00:01:53.797 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_3 ]]
00:01:53.797 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf ]]
00:01:53.797 + source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf
00:01:53.797 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:53.797 ++ SPDK_TEST_NVME=1
00:01:53.797 ++ SPDK_TEST_FTL=1
00:01:53.797 ++ SPDK_TEST_ISAL=1
00:01:53.797 ++ SPDK_RUN_ASAN=1
00:01:53.797 ++ SPDK_RUN_UBSAN=1
00:01:53.797 ++ SPDK_TEST_XNVME=1
00:01:53.797 ++ SPDK_TEST_NVME_FDP=1
00:01:53.797 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:53.797 ++ RUN_NIGHTLY=0
00:01:53.797 + cd /var/jenkins/workspace/nvme-vg-autotest_3
00:01:53.797 + nvme_files=()
00:01:53.797 + declare -A nvme_files
00:01:53.797 + backend_dir=/var/lib/libvirt/images/backends
00:01:53.797 + nvme_files['nvme.img']=5G
00:01:53.797 + nvme_files['nvme-cmb.img']=5G
00:01:53.797 + nvme_files['nvme-multi0.img']=4G
00:01:53.797 + nvme_files['nvme-multi1.img']=4G
00:01:53.797 + nvme_files['nvme-multi2.img']=4G
00:01:53.797 + nvme_files['nvme-openstack.img']=8G
00:01:53.797 + nvme_files['nvme-zns.img']=5G
00:01:53.797 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:53.797 + (( SPDK_TEST_FTL == 1 ))
00:01:53.797 + nvme_files["nvme-ftl.img"]=6G
00:01:53.797 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:53.797 + nvme_files["nvme-fdp.img"]=1G
00:01:53.797 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:53.797 + for nvme in "${!nvme_files[@]}"
00:01:53.797 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G
00:01:53.797 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:53.797 + for nvme in "${!nvme_files[@]}"
00:01:53.797 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G
00:01:54.055 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:54.055 + for nvme in "${!nvme_files[@]}"
00:01:54.055 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G
00:01:54.055 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:54.055 + for nvme in "${!nvme_files[@]}"
00:01:54.055 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G
00:01:54.055 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:54.055 + for nvme in "${!nvme_files[@]}"
00:01:54.055 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G
00:01:54.055 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:54.055 + for nvme in "${!nvme_files[@]}"
00:01:54.055 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G
00:01:54.055 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:54.055 + for nvme in "${!nvme_files[@]}"
00:01:54.055 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G
00:01:54.321 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:54.321 + for nvme in "${!nvme_files[@]}"
00:01:54.321 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G
00:01:54.321 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:54.321 + for nvme in "${!nvme_files[@]}"
00:01:54.321 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G
00:01:54.583 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:54.583 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu
00:01:54.583 + echo 'End stage prepare_nvme.sh'
00:01:54.583 End stage prepare_nvme.sh
00:01:54.596 [Pipeline] sh
00:01:54.880 + DISTRO=fedora39
00:01:54.880 + CPUS=10
00:01:54.880 + RAM=12288
00:01:54.880 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:54.880 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:54.880
00:01:54.880 DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant
00:01:54.880 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk
00:01:54.880 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_3
00:01:54.880 HELP=0
00:01:54.880 DRY_RUN=0
00:01:54.880 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,
00:01:54.880 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:54.880 NVME_AUTO_CREATE=0
00:01:54.880 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,,
00:01:54.880 NVME_CMB=,,,,
00:01:54.880 NVME_PMR=,,,,
00:01:54.880 NVME_ZNS=,,,,
00:01:54.880 NVME_MS=true,,,,
00:01:54.880 NVME_FDP=,,,on,
00:01:54.880 SPDK_VAGRANT_DISTRO=fedora39
00:01:54.880 SPDK_VAGRANT_VMCPU=10
00:01:54.880 SPDK_VAGRANT_VMRAM=12288
00:01:54.880 SPDK_VAGRANT_PROVIDER=libvirt
00:01:54.880 SPDK_VAGRANT_HTTP_PROXY=
00:01:54.880 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:54.880 SPDK_OPENSTACK_NETWORK=0
00:01:54.880 VAGRANT_PACKAGE_BOX=0
00:01:54.880 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile
00:01:54.880 FORCE_DISTRO=true
00:01:54.880 VAGRANT_BOX_VERSION=
00:01:54.880 EXTRA_VAGRANTFILES=
00:01:54.880 NIC_MODEL=e1000
00:01:54.880
00:01:54.880 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt'
00:01:54.880 /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_3
00:01:57.416 Bringing machine 'default' up with 'libvirt' provider...
00:01:57.673 ==> default: Creating image (snapshot of base box volume).
00:01:57.673 ==> default: Creating domain with the following settings...
00:01:57.673 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733400328_7be9acc28ba5af7a4682
00:01:57.673 ==> default: -- Domain type: kvm
00:01:57.673 ==> default: -- Cpus: 10
00:01:57.673 ==> default: -- Feature: acpi
00:01:57.673 ==> default: -- Feature: apic
00:01:57.673 ==> default: -- Feature: pae
00:01:57.673 ==> default: -- Memory: 12288M
00:01:57.673 ==> default: -- Memory Backing: hugepages:
00:01:57.673 ==> default: -- Management MAC:
00:01:57.673 ==> default: -- Loader:
00:01:57.673 ==> default: -- Nvram:
00:01:57.673 ==> default: -- Base box: spdk/fedora39
00:01:57.673 ==> default: -- Storage pool: default
00:01:57.673 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733400328_7be9acc28ba5af7a4682.img (20G)
00:01:57.673 ==> default: -- Volume Cache: default
00:01:57.673 ==> default: -- Kernel:
00:01:57.673 ==> default: -- Initrd:
00:01:57.673 ==> default: -- Graphics Type: vnc
00:01:57.673 ==> default: -- Graphics Port: -1
00:01:57.673 ==> default: -- Graphics IP: 127.0.0.1
00:01:57.673 ==> default: -- Graphics Password: Not defined
00:01:57.673 ==> default: -- Video Type: cirrus
00:01:57.673 ==> default: -- Video VRAM: 9216
00:01:57.673 ==> default: -- Sound Type:
00:01:57.673 ==> default: -- Keymap: en-us
00:01:57.673 ==> default: -- TPM Path:
00:01:57.673 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:57.673 ==> default: -- Command line args:
00:01:57.673 ==> default: -> value=-device,
00:01:57.674 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:57.674 ==> default: -> value=-drive,
00:01:57.674 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:57.674 ==> default: -> value=-device,
00:01:57.674 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:57.674 ==> default: -> value=-device,
00:01:57.674 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:57.674 ==> default: -> value=-drive,
00:01:57.674 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0,
00:01:57.674 ==> default: -> value=-device,
00:01:57.674 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:57.674 ==> default: -> value=-device,
00:01:57.674 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:57.674 ==> default: -> value=-drive,
00:01:57.674 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:57.674 ==> default: -> value=-device,
00:01:57.674 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:57.674 ==> default: -> value=-drive,
00:01:57.674 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:57.674 ==> default: -> value=-device,
00:01:57.674 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:57.674 ==> default: -> value=-drive,
00:01:57.674 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:57.674 ==> default: -> value=-device,
00:01:57.674 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:57.674 ==> default: -> value=-device,
00:01:57.674 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:57.674 ==> default: -> value=-device,
00:01:57.674 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:57.674 ==> default: -> value=-drive,
00:01:57.674 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:57.674 ==> default: -> value=-device,
00:01:57.674 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:57.674 ==> default: Creating shared folders metadata...
00:01:57.674 ==> default: Starting domain.
00:01:59.043 ==> default: Waiting for domain to get an IP address...
00:02:17.156 ==> default: Waiting for SSH to become available...
00:02:17.156 ==> default: Configuring and enabling network interfaces...
00:02:20.505 default: SSH address: 192.168.121.156:22
00:02:20.505 default: SSH username: vagrant
00:02:20.505 default: SSH auth method: private key
00:02:23.049 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:31.235 ==> default: Mounting SSHFS shared folder...
00:02:33.154 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:33.154 ==> default: Checking Mount..
00:02:34.546 ==> default: Folder Successfully Mounted!
00:02:34.546
00:02:34.546 SUCCESS!
00:02:34.546
00:02:34.546 cd to /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use.
00:02:34.546 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:34.546 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm.
00:02:34.546
00:02:34.554 [Pipeline] }
00:02:34.566 [Pipeline] // stage
00:02:34.574 [Pipeline] dir
00:02:34.574 Running in /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt
00:02:34.576 [Pipeline] {
00:02:34.586 [Pipeline] catchError
00:02:34.588 [Pipeline] {
00:02:34.596 [Pipeline] sh
00:02:34.872 + vagrant ssh-config --host vagrant
00:02:34.872 + sed -ne '/^Host/,$p'
00:02:34.872 + tee ssh_conf
00:02:38.164 Host vagrant
00:02:38.164 HostName 192.168.121.156
00:02:38.164 User vagrant
00:02:38.164 Port 22
00:02:38.164 UserKnownHostsFile /dev/null
00:02:38.164 StrictHostKeyChecking no
00:02:38.164 PasswordAuthentication no
00:02:38.164 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:38.164 IdentitiesOnly yes
00:02:38.164 LogLevel FATAL
00:02:38.164 ForwardAgent yes
00:02:38.164 ForwardX11 yes
00:02:38.164
00:02:38.179 [Pipeline] withEnv
00:02:38.181 [Pipeline] {
00:02:38.194 [Pipeline] sh
00:02:38.478 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:38.478 source /etc/os-release
00:02:38.478 [[ -e /image.version ]] && img=$(< /image.version)
00:02:38.478 # Minimal, systemd-like check.
00:02:38.478 if [[ -e /.dockerenv ]]; then
00:02:38.478 # Clear garbage from the node'\''s name:
00:02:38.478 # agt-er_autotest_547-896 -> autotest_547-896
00:02:38.478 # $HOSTNAME is the actual container id
00:02:38.478 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:38.478 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:38.478 # We can assume this is a mount from a host where container is running,
00:02:38.478 # so fetch its hostname to easily identify the target swarm worker.
00:02:38.478 container="$(< /etc/hostname) ($agent)"
00:02:38.478 else
00:02:38.478 # Fallback
00:02:38.478 container=$agent
00:02:38.478 fi
00:02:38.478 fi
00:02:38.478 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:38.478 '
00:02:38.752 [Pipeline] }
00:02:38.767 [Pipeline] // withEnv
00:02:38.775 [Pipeline] setCustomBuildProperty
00:02:38.789 [Pipeline] stage
00:02:38.791 [Pipeline] { (Tests)
00:02:38.807 [Pipeline] sh
00:02:39.092 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:39.390 [Pipeline] sh
00:02:39.731 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:40.010 [Pipeline] timeout
00:02:40.010 Timeout set to expire in 50 min
00:02:40.012 [Pipeline] {
00:02:40.028 [Pipeline] sh
00:02:40.314 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:40.886 HEAD is now at 85bc1e85a lib/reduce: Support storing metadata on backing dev. (4 of 5, data unmap with async metadata)
00:02:40.899 [Pipeline] sh
00:02:41.186 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:41.463 [Pipeline] sh
00:02:41.747 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:42.027 [Pipeline] sh
00:02:42.315 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:42.576 ++ readlink -f spdk_repo
00:02:42.576 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:42.576 + [[ -n /home/vagrant/spdk_repo ]]
00:02:42.576 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:42.576 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:42.576 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:42.576 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:42.576 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:42.576 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:42.576 + cd /home/vagrant/spdk_repo
00:02:42.576 + source /etc/os-release
00:02:42.576 ++ NAME='Fedora Linux'
00:02:42.576 ++ VERSION='39 (Cloud Edition)'
00:02:42.576 ++ ID=fedora
00:02:42.576 ++ VERSION_ID=39
00:02:42.576 ++ VERSION_CODENAME=
00:02:42.576 ++ PLATFORM_ID=platform:f39
00:02:42.576 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:42.576 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:42.576 ++ LOGO=fedora-logo-icon
00:02:42.576 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:42.576 ++ HOME_URL=https://fedoraproject.org/
00:02:42.576 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:42.576 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:42.576 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:42.576 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:42.576 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:42.576 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:42.576 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:42.576 ++ SUPPORT_END=2024-11-12
00:02:42.576 ++ VARIANT='Cloud Edition'
00:02:42.576 ++ VARIANT_ID=cloud
00:02:42.576 + uname -a
00:02:42.576 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:42.576 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:42.836 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:43.408 Hugepages
00:02:43.408 node hugesize free / total
00:02:43.408 node0 1048576kB 0 / 0
00:02:43.408 node0 2048kB 0 / 0
00:02:43.408
00:02:43.408 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:43.408 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:43.408 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:43.408 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:43.408 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:43.408 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:43.408 + rm -f /tmp/spdk-ld-path
00:02:43.408 + source autorun-spdk.conf
00:02:43.408 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:43.408 ++ SPDK_TEST_NVME=1
00:02:43.408 ++ SPDK_TEST_FTL=1
00:02:43.408 ++ SPDK_TEST_ISAL=1
00:02:43.408 ++ SPDK_RUN_ASAN=1
00:02:43.408 ++ SPDK_RUN_UBSAN=1
00:02:43.408 ++ SPDK_TEST_XNVME=1
00:02:43.408 ++ SPDK_TEST_NVME_FDP=1
00:02:43.408 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:43.408 ++ RUN_NIGHTLY=0
00:02:43.408 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:43.408 + [[ -n '' ]]
00:02:43.408 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:43.408 + for M in /var/spdk/build-*-manifest.txt
00:02:43.408 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:43.408 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:43.408 + for M in /var/spdk/build-*-manifest.txt
00:02:43.408 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:43.408 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:43.408 + for M in /var/spdk/build-*-manifest.txt
00:02:43.408 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:43.408 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:43.408 ++ uname
00:02:43.408 + [[ Linux == \L\i\n\u\x ]]
00:02:43.408 + sudo dmesg -T
00:02:43.408 + sudo dmesg --clear
00:02:43.408 + dmesg_pid=5021
00:02:43.408 + [[ Fedora Linux == FreeBSD ]]
00:02:43.408 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:43.408 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:43.408 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:43.408 + [[ -x /usr/src/fio-static/fio ]]
00:02:43.408 + sudo dmesg -Tw
00:02:43.408 + export FIO_BIN=/usr/src/fio-static/fio
00:02:43.408 + FIO_BIN=/usr/src/fio-static/fio
00:02:43.408 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:43.408 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:43.408 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:43.408 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:43.408 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:43.408 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:43.408 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:43.408 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:43.408 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:43.670 12:06:14 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:43.670 12:06:14 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:43.670 12:06:14 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:43.670 12:06:14 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:43.670 12:06:14 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:43.670 12:06:14 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:43.670 12:06:14 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:43.670 12:06:14 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:43.670 12:06:14 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:43.670 12:06:14 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:43.670 12:06:14 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:43.670 12:06:14 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:02:43.670 12:06:14 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:43.670 12:06:14 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:43.670 12:06:14 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:43.670 12:06:14 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:43.670 12:06:14 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:43.670 12:06:14 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:43.670 12:06:14 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:43.670 12:06:14 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:43.670 12:06:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:43.670 12:06:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:43.670 12:06:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:43.670 12:06:14 -- paths/export.sh@5 -- $ export PATH
00:02:43.670 12:06:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:43.670 12:06:14 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:43.670 12:06:14 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:43.670 12:06:14 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733400374.XXXXXX
00:02:43.670 12:06:14 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733400374.aXrxyJ
00:02:43.670 12:06:14 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:43.670 12:06:14 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:43.670 12:06:14 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:43.670 12:06:14 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:43.670 12:06:14 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:43.670 12:06:14 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:43.670 12:06:14 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:43.670 12:06:14 -- common/autotest_common.sh@10 -- $ set +x
00:02:43.670 12:06:14 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:43.670 12:06:14 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:43.670 12:06:14 -- pm/common@17 -- $ local monitor
00:02:43.670 12:06:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:43.670 12:06:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:43.670 12:06:14 -- pm/common@25 -- $ sleep 1
00:02:43.670 12:06:14 -- pm/common@21 -- $ date +%s
00:02:43.670 12:06:14 -- pm/common@21 -- $ date +%s
00:02:43.670 12:06:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733400374
00:02:43.670 12:06:14 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733400374
00:02:43.670 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733400374_collect-vmstat.pm.log
00:02:43.670 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733400374_collect-cpu-load.pm.log
00:02:44.612 12:06:15 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:44.612 12:06:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:44.612 12:06:15 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:44.612 12:06:15 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:44.612 12:06:15 -- spdk/autobuild.sh@16 -- $ date -u
00:02:44.612 Thu Dec 5 12:06:15 PM UTC 2024
00:02:44.612 12:06:15 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:44.612 v25.01-pre-286-g85bc1e85a
00:02:44.612 12:06:15 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:44.612 12:06:15 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:44.612 12:06:15 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:44.613 12:06:15 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:44.613 12:06:15 -- common/autotest_common.sh@10 -- $ set +x
00:02:44.613 ************************************
00:02:44.613 START TEST asan
00:02:44.613 ************************************
00:02:44.613 using asan
00:02:44.613 12:06:15 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:44.613
00:02:44.613 real 0m0.000s
00:02:44.613 user 0m0.000s
00:02:44.613 sys 0m0.000s
00:02:44.613 ************************************
00:02:44.613 END TEST asan
00:02:44.613 ************************************
00:02:44.613 12:06:15 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:44.613 12:06:15 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:44.874 12:06:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:44.874 12:06:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:44.874 12:06:15 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:44.874 12:06:15 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:44.874 12:06:15 -- common/autotest_common.sh@10 -- $ set +x
00:02:44.874 ************************************
00:02:44.874 START TEST ubsan
00:02:44.874 ************************************
00:02:44.874 using ubsan
00:02:44.874 12:06:15 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:44.874
00:02:44.874 real 0m0.000s
00:02:44.874 user 0m0.000s
00:02:44.874 sys 0m0.000s
00:02:44.874 12:06:15 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:44.874 ************************************
00:02:44.874 END TEST ubsan
00:02:44.874 ************************************
00:02:44.874 12:06:15 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:44.874 12:06:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:44.874 12:06:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:44.874 12:06:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:44.874 12:06:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:44.874 12:06:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:44.874 12:06:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:44.874 12:06:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:44.874 12:06:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:44.874 12:06:15 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:44.874 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:44.874 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:45.446 Using 'verbs' RDMA provider
00:02:58.613 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:10.817 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:10.817 Creating mk/config.mk...done.
00:03:10.817 Creating mk/cc.flags.mk...done.
00:03:10.817 Type 'make' to build.
00:03:10.817 12:06:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:10.817 12:06:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:10.818 12:06:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:10.818 12:06:40 -- common/autotest_common.sh@10 -- $ set +x
00:03:10.818 ************************************
00:03:10.818 START TEST make
00:03:10.818 ************************************
00:03:10.818 12:06:40 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:10.818 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:10.818 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:10.818 meson setup builddir \
00:03:10.818 -Dwith-libaio=enabled \
00:03:10.818 -Dwith-liburing=enabled \
00:03:10.818 -Dwith-libvfn=disabled \
00:03:10.818 -Dwith-spdk=disabled \
00:03:10.818 -Dexamples=false \
00:03:10.818 -Dtests=false \
00:03:10.818 -Dtools=false && \
00:03:10.818 meson compile -C builddir && \
00:03:10.818 cd -)
00:03:10.818 make[1]: Nothing to be done for 'all'.
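The xnvme sub-build printed above is driven from SPDK's top-level make, but the logged invocation can also be replayed by hand. A minimal sketch, assuming a Linux host with meson, ninja, libaio, and liburing installed; the checkout path is the CI VM's and would differ elsewhere, and every flag below is taken verbatim from the command in the log:

    # Replay the xnvme configuration exactly as logged above: libaio and
    # liburing backends enabled, libvfn and the spdk subproject disabled,
    # and no examples/tests/tools built.
    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH="${PKG_CONFIG_PATH:-}:/usr/lib/pkgconfig:/usr/lib64/pkgconfig"
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir    # per the log, meson autodetects and drives ninja -C builddir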
00:03:12.198 The Meson build system
00:03:12.198 Version: 1.5.0
00:03:12.198 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:12.198 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:12.198 Build type: native build
00:03:12.198 Project name: xnvme
00:03:12.198 Project version: 0.7.5
00:03:12.198 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:12.198 C linker for the host machine: cc ld.bfd 2.40-14
00:03:12.198 Host machine cpu family: x86_64
00:03:12.198 Host machine cpu: x86_64
00:03:12.198 Message: host_machine.system: linux
00:03:12.198 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:12.198 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:12.198 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:12.198 Run-time dependency threads found: YES
00:03:12.198 Has header "setupapi.h" : NO
00:03:12.198 Has header "linux/blkzoned.h" : YES
00:03:12.198 Has header "linux/blkzoned.h" : YES (cached)
00:03:12.198 Has header "libaio.h" : YES
00:03:12.198 Library aio found: YES
00:03:12.198 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:12.198 Run-time dependency liburing found: YES 2.2
00:03:12.198 Dependency libvfn skipped: feature with-libvfn disabled
00:03:12.198 Found CMake: /usr/bin/cmake (3.27.7)
00:03:12.198 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:03:12.198 Subproject spdk : skipped: feature with-spdk disabled
00:03:12.198 Run-time dependency appleframeworks found: NO (tried framework)
00:03:12.198 Run-time dependency appleframeworks found: NO (tried framework)
00:03:12.198 Library rt found: YES
00:03:12.198 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:12.198 Configuring xnvme_config.h using configuration
00:03:12.198 Configuring xnvme.spec using configuration
00:03:12.198 Run-time dependency bash-completion found: YES 2.11
00:03:12.198 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:12.198 Program cp found: YES (/usr/bin/cp)
00:03:12.198 Build targets in project: 3
00:03:12.198
00:03:12.198 xnvme 0.7.5
00:03:12.198
00:03:12.198 Subprojects
00:03:12.198 spdk : NO Feature 'with-spdk' disabled
00:03:12.198
00:03:12.198 User defined options
00:03:12.198 examples : false
00:03:12.198 tests : false
00:03:12.198 tools : false
00:03:12.198 with-libaio : enabled
00:03:12.198 with-liburing: enabled
00:03:12.198 with-libvfn : disabled
00:03:12.198 with-spdk : disabled
00:03:12.198
00:03:12.198 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:12.766 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:12.766 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:03:12.766 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:03:12.766 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:03:12.766 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:03:12.766 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:03:12.766 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:03:12.766 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:03:12.766 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:03:12.766 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:03:12.766 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:03:12.766 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:03:12.766 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:03:12.766 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:03:13.025 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:03:13.025 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:03:13.025 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:03:13.025 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:03:13.025 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:03:13.025 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:03:13.025 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:03:13.025 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:03:13.025 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:03:13.025 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:03:13.025 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:03:13.025 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:03:13.025 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:03:13.025 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:03:13.025 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:03:13.025 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:03:13.025 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:03:13.025 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:03:13.025 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:03:13.025 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:03:13.025 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:03:13.025 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:03:13.025 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:03:13.025 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:03:13.025 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:03:13.025 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:03:13.025 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:03:13.025 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:03:13.025 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:03:13.025 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:03:13.025 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:03:13.025 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:03:13.025 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:03:13.025 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:03:13.025 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:03:13.025 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:03:13.025 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:03:13.025 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:03:13.284 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:03:13.284 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:03:13.284 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:03:13.284 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:03:13.284 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:03:13.284 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:03:13.284 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:03:13.284 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:03:13.284 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:03:13.284 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:03:13.284 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:03:13.284 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:03:13.284 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:03:13.284 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:03:13.284 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:03:13.284 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:03:13.284 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:03:13.284 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:03:13.284 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:03:13.541 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:03:13.541 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:03:13.541 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:03:13.800 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:03:13.800 [75/76] Linking static target lib/libxnvme.a
00:03:13.800 [76/76] Linking target lib/libxnvme.so.0.7.5
00:03:13.800 INFO: autodetecting backend as ninja
00:03:13.800 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:14.057 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:03:20.609 The Meson build system
00:03:20.609 Version: 1.5.0
00:03:20.609 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:20.609 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:20.609 Build type: native build
00:03:20.609 Program cat found: YES (/usr/bin/cat)
00:03:20.609 Project name: DPDK
00:03:20.609 Project version: 24.03.0
00:03:20.609 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:20.609 C linker for the host machine: cc ld.bfd 2.40-14
00:03:20.609 Host machine cpu family: x86_64
00:03:20.609 Host machine cpu: x86_64
00:03:20.609 Message: ## Building in Developer Mode ##
00:03:20.609 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:20.609 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:20.609 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:20.609 Program python3 found: YES (/usr/bin/python3)
00:03:20.609 Program cat found: YES (/usr/bin/cat)
00:03:20.609 Compiler for C supports arguments -march=native: YES
00:03:20.609 Checking for size of "void *" : 8
00:03:20.609 Checking for size of "void *" : 8 (cached)
00:03:20.609 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:20.609 Library m found: YES
00:03:20.609 Library numa found: YES
00:03:20.609 Has header "numaif.h" : YES
00:03:20.609 Library fdt found: NO
00:03:20.609 Library execinfo found: NO
00:03:20.609 Has header "execinfo.h" : YES
00:03:20.609 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:20.609 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:20.609 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:20.609 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:20.609 Run-time dependency openssl found: YES 3.1.1
00:03:20.609 Run-time dependency libpcap found: YES 1.10.4
00:03:20.609 Has header "pcap.h" with dependency libpcap: YES
00:03:20.609 Compiler for C supports arguments -Wcast-qual: YES
00:03:20.609 Compiler for C supports arguments -Wdeprecated: YES
00:03:20.609 Compiler for C supports arguments -Wformat: YES
00:03:20.609 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:20.609 Compiler for C supports arguments -Wformat-security: NO
00:03:20.609 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:20.609 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:20.609 Compiler for C supports arguments -Wnested-externs: YES
00:03:20.609 Compiler for C supports arguments -Wold-style-definition: YES
00:03:20.609 Compiler for C supports arguments -Wpointer-arith: YES
00:03:20.609 Compiler for C supports arguments -Wsign-compare: YES
00:03:20.609 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:20.609 Compiler for C supports arguments -Wundef: YES
00:03:20.609 Compiler for C supports arguments -Wwrite-strings: YES
00:03:20.609 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:20.609 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:20.609 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:20.609 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:20.609 Program objdump found: YES (/usr/bin/objdump)
00:03:20.609 Compiler for C supports arguments -mavx512f: YES
00:03:20.609 Checking if "AVX512 checking" compiles: YES
00:03:20.609 Fetching value of define "__SSE4_2__" : 1
00:03:20.609 Fetching value of define "__AES__" : 1
00:03:20.609 Fetching value of define "__AVX__" : 1
00:03:20.609 Fetching value of define "__AVX2__" : 1
00:03:20.609 Fetching value of define "__AVX512BW__" : 1
00:03:20.609 Fetching value of define "__AVX512CD__" : 1
00:03:20.609 Fetching value of define "__AVX512DQ__" : 1
00:03:20.609 Fetching value of define "__AVX512F__" : 1
00:03:20.609 Fetching value of define "__AVX512VL__" : 1
00:03:20.609 Fetching value of define "__PCLMUL__" : 1
00:03:20.609 Fetching value of define "__RDRND__" : 1
00:03:20.609 Fetching value of define "__RDSEED__" : 1
00:03:20.609 Fetching value of define "__VPCLMULQDQ__" : 1
00:03:20.609 Fetching value of define "__znver1__" : (undefined)
00:03:20.609 Fetching value of define "__znver2__" : (undefined)
00:03:20.609 Fetching value of define "__znver3__" : (undefined)
00:03:20.609 Fetching value of define "__znver4__" : (undefined)
00:03:20.609 Library asan found: YES
00:03:20.609 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:20.609 Message: lib/log: Defining dependency "log"
00:03:20.609 Message: lib/kvargs: Defining dependency "kvargs"
00:03:20.609 Message: lib/telemetry: Defining dependency "telemetry"
00:03:20.609 Library rt found: YES
00:03:20.609 Checking for function "getentropy" : NO
00:03:20.609 Message: lib/eal: Defining dependency "eal"
00:03:20.609 Message: lib/ring: Defining dependency "ring"
00:03:20.609 Message: lib/rcu: Defining dependency "rcu"
00:03:20.609 Message: lib/mempool: Defining dependency "mempool"
00:03:20.609 Message: lib/mbuf: Defining dependency "mbuf"
00:03:20.609 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:20.609 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:20.609 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:20.609 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:20.609 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:20.609 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:03:20.609 Compiler for C supports arguments -mpclmul: YES
00:03:20.609 Compiler for C supports arguments -maes: YES
00:03:20.609 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:20.609 Compiler for C supports arguments -mavx512bw: YES
00:03:20.609 Compiler for C supports arguments -mavx512dq: YES
00:03:20.609 Compiler for C supports arguments -mavx512vl: YES
00:03:20.609 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:20.609 Compiler for C supports arguments -mavx2: YES
00:03:20.609 Compiler for C supports arguments -mavx: YES
00:03:20.609 Message: lib/net: Defining dependency "net"
00:03:20.609 Message: lib/meter: Defining dependency "meter"
00:03:20.609 Message: lib/ethdev: Defining dependency "ethdev"
00:03:20.609 Message: lib/pci: Defining dependency "pci"
00:03:20.609 Message: lib/cmdline: Defining dependency "cmdline"
00:03:20.609 Message: lib/hash: Defining dependency "hash"
00:03:20.609 Message: lib/timer: Defining dependency "timer"
00:03:20.609 Message: lib/compressdev: Defining dependency "compressdev"
00:03:20.609 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:20.609 Message: lib/dmadev: Defining dependency "dmadev"
00:03:20.609 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:20.609 Message: lib/power: Defining dependency "power"
00:03:20.609 Message: lib/reorder: Defining dependency "reorder"
00:03:20.609 Message: lib/security: Defining dependency "security"
00:03:20.609 Has header "linux/userfaultfd.h" : YES
00:03:20.609 Has header "linux/vduse.h" : YES
00:03:20.609 Message: lib/vhost: Defining dependency "vhost"
00:03:20.609 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:20.609 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:20.609 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:20.609 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:20.609 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:20.609 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:20.609 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:20.609 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:20.609 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:20.609 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:20.609 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:20.609 Configuring doxy-api-html.conf using configuration
00:03:20.609 Configuring doxy-api-man.conf using configuration
00:03:20.609 Program mandb found: YES (/usr/bin/mandb)
00:03:20.609 Program sphinx-build found: NO
00:03:20.609 Configuring rte_build_config.h using configuration
00:03:20.609 Message:
00:03:20.609 =================
00:03:20.609 Applications Enabled
00:03:20.609 =================
00:03:20.609
00:03:20.609 apps:
00:03:20.609
00:03:20.609
00:03:20.609 Message:
00:03:20.609 =================
00:03:20.609 Libraries Enabled
00:03:20.609 =================
00:03:20.609
00:03:20.609 libs:
00:03:20.609 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:20.609 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:20.609 cryptodev, dmadev, power, reorder, security, vhost,
00:03:20.609
00:03:20.609 Message:
00:03:20.609 ===============
00:03:20.609 Drivers Enabled
00:03:20.609 ===============
00:03:20.609
00:03:20.609 common:
00:03:20.609
00:03:20.609 bus:
00:03:20.609 pci, vdev,
00:03:20.609 mempool:
00:03:20.609 ring,
00:03:20.609 dma:
00:03:20.609
00:03:20.609 net:
00:03:20.609
00:03:20.609 crypto:
00:03:20.609
00:03:20.609 compress:
00:03:20.609
00:03:20.609 vdpa:
00:03:20.609
00:03:20.609
00:03:20.609 Message:
00:03:20.609 =================
00:03:20.609 Content Skipped
00:03:20.609 =================
00:03:20.609
00:03:20.609 apps:
00:03:20.610 dumpcap: explicitly disabled via build config
00:03:20.610 graph: explicitly disabled via build config
00:03:20.610 pdump: explicitly disabled via build config
00:03:20.610 proc-info: explicitly disabled via build config
00:03:20.610 test-acl: explicitly disabled via build config
00:03:20.610 test-bbdev: explicitly disabled via build config
00:03:20.610 test-cmdline: explicitly disabled via build config
00:03:20.610 test-compress-perf: explicitly disabled via build config
00:03:20.610 test-crypto-perf: explicitly disabled via build config
00:03:20.610 test-dma-perf: explicitly disabled via build config
00:03:20.610 test-eventdev: explicitly disabled via build config
00:03:20.610 test-fib: explicitly disabled via build config
00:03:20.610 test-flow-perf: explicitly disabled via build config
00:03:20.610 test-gpudev: explicitly disabled via build config
00:03:20.610 test-mldev: explicitly disabled via build config
00:03:20.610 test-pipeline: explicitly disabled via build config
00:03:20.610 test-pmd: explicitly disabled via build config
00:03:20.610 test-regex: explicitly disabled via build config
00:03:20.610 test-sad: explicitly disabled via build config
00:03:20.610 test-security-perf: explicitly disabled via build config
00:03:20.610
00:03:20.610 libs:
00:03:20.610 argparse: explicitly disabled via build config
00:03:20.610 metrics: explicitly disabled via build config
00:03:20.610 acl: explicitly disabled via build config
00:03:20.610 bbdev: explicitly disabled via build config
00:03:20.610 bitratestats: explicitly disabled via build config
00:03:20.610 bpf: explicitly disabled via build config
00:03:20.610 cfgfile: explicitly disabled via build config
00:03:20.610 distributor: explicitly disabled via build config
00:03:20.610 efd: explicitly disabled via build config
00:03:20.610 eventdev: explicitly disabled via build config
00:03:20.610 dispatcher: explicitly disabled via build config
00:03:20.610 gpudev: explicitly disabled via build config
00:03:20.610 gro: explicitly disabled via build config
00:03:20.610 gso: explicitly disabled via build config
00:03:20.610 ip_frag: explicitly disabled via build config
00:03:20.610 jobstats: explicitly disabled via build config
00:03:20.610 latencystats: explicitly disabled via build config
00:03:20.610 lpm: explicitly disabled via build config
00:03:20.610 member: explicitly disabled via build config
00:03:20.610 pcapng: explicitly disabled via build config
00:03:20.610 rawdev: explicitly disabled via build config
00:03:20.610 regexdev: explicitly disabled via build config
00:03:20.610 mldev: explicitly disabled via build config
00:03:20.610 rib: explicitly disabled via build config
00:03:20.610 sched: explicitly disabled via build config
00:03:20.610 stack: explicitly disabled via build config
00:03:20.610 ipsec: explicitly disabled via build config
00:03:20.610 pdcp: explicitly disabled via build config
00:03:20.610 fib: explicitly disabled via build config
00:03:20.610 port: explicitly disabled via build config
00:03:20.610 pdump: explicitly disabled via build config
00:03:20.610 table: explicitly disabled via build config
00:03:20.610 pipeline: explicitly disabled via build config
00:03:20.610 graph: explicitly disabled via build config
00:03:20.610 node: explicitly disabled via build config
00:03:20.610
00:03:20.610 drivers:
00:03:20.610 common/cpt: not in enabled drivers build config
00:03:20.610 common/dpaax: not in enabled drivers build config
00:03:20.610 common/iavf: not in enabled drivers build config
00:03:20.610 common/idpf: not in enabled drivers build config
00:03:20.610 common/ionic: not in enabled drivers build config
00:03:20.610 common/mvep: not in enabled drivers build config
00:03:20.610 common/octeontx: not in enabled drivers build config
00:03:20.610 bus/auxiliary: not in enabled drivers build config
00:03:20.610 bus/cdx: not in enabled drivers build config
00:03:20.610 bus/dpaa: not in enabled drivers build config
00:03:20.610 bus/fslmc: not in enabled drivers build config
00:03:20.610 bus/ifpga: not in enabled drivers build config
00:03:20.610 bus/platform: not in enabled drivers build config
00:03:20.610 bus/uacce: not in enabled drivers build config
00:03:20.610 bus/vmbus: not in enabled drivers build config
00:03:20.610 common/cnxk: not in enabled drivers build config
00:03:20.610 common/mlx5: not in enabled drivers build config
00:03:20.610 common/nfp: not in enabled drivers build config
00:03:20.610 common/nitrox: not in enabled drivers build config
00:03:20.610 common/qat: not in enabled drivers build config
00:03:20.610 common/sfc_efx: not in enabled drivers build config
00:03:20.610 mempool/bucket: not in enabled drivers build config
00:03:20.610 mempool/cnxk: not in enabled drivers build config
00:03:20.610 mempool/dpaa: not in enabled drivers build config
00:03:20.610 mempool/dpaa2: not in enabled drivers build config
00:03:20.610 mempool/octeontx: not in enabled drivers build config
00:03:20.610 mempool/stack: not in enabled drivers build config
00:03:20.610 dma/cnxk: not in enabled drivers build config
00:03:20.610 dma/dpaa: not in enabled drivers build config
00:03:20.610 dma/dpaa2: not in enabled drivers build config
00:03:20.610 dma/hisilicon: not in enabled drivers build config
00:03:20.610 dma/idxd: not in enabled drivers build config
00:03:20.610 dma/ioat: not in enabled drivers build config
00:03:20.610 dma/skeleton: not in enabled drivers build config
00:03:20.610 net/af_packet: not in enabled drivers build config
00:03:20.610 net/af_xdp: not in enabled drivers build config
00:03:20.610 net/ark: not in enabled drivers build config
00:03:20.610 net/atlantic: not in enabled drivers build config
00:03:20.610 net/avp: not in enabled drivers build config
00:03:20.610 net/axgbe: not in enabled drivers build config
00:03:20.610 net/bnx2x: not in enabled drivers build config
00:03:20.610 net/bnxt: not in enabled drivers build config
00:03:20.610 net/bonding: not in enabled drivers build config
00:03:20.610 net/cnxk: not in enabled drivers build config
00:03:20.610 net/cpfl: not in enabled drivers
build config 00:03:20.610 net/cxgbe: not in enabled drivers build config 00:03:20.610 net/dpaa: not in enabled drivers build config 00:03:20.610 net/dpaa2: not in enabled drivers build config 00:03:20.610 net/e1000: not in enabled drivers build config 00:03:20.610 net/ena: not in enabled drivers build config 00:03:20.610 net/enetc: not in enabled drivers build config 00:03:20.610 net/enetfec: not in enabled drivers build config 00:03:20.610 net/enic: not in enabled drivers build config 00:03:20.610 net/failsafe: not in enabled drivers build config 00:03:20.610 net/fm10k: not in enabled drivers build config 00:03:20.610 net/gve: not in enabled drivers build config 00:03:20.610 net/hinic: not in enabled drivers build config 00:03:20.610 net/hns3: not in enabled drivers build config 00:03:20.610 net/i40e: not in enabled drivers build config 00:03:20.610 net/iavf: not in enabled drivers build config 00:03:20.610 net/ice: not in enabled drivers build config 00:03:20.610 net/idpf: not in enabled drivers build config 00:03:20.610 net/igc: not in enabled drivers build config 00:03:20.610 net/ionic: not in enabled drivers build config 00:03:20.610 net/ipn3ke: not in enabled drivers build config 00:03:20.610 net/ixgbe: not in enabled drivers build config 00:03:20.610 net/mana: not in enabled drivers build config 00:03:20.610 net/memif: not in enabled drivers build config 00:03:20.610 net/mlx4: not in enabled drivers build config 00:03:20.610 net/mlx5: not in enabled drivers build config 00:03:20.610 net/mvneta: not in enabled drivers build config 00:03:20.610 net/mvpp2: not in enabled drivers build config 00:03:20.610 net/netvsc: not in enabled drivers build config 00:03:20.610 net/nfb: not in enabled drivers build config 00:03:20.610 net/nfp: not in enabled drivers build config 00:03:20.610 net/ngbe: not in enabled drivers build config 00:03:20.610 net/null: not in enabled drivers build config 00:03:20.610 net/octeontx: not in enabled drivers build config 00:03:20.610 net/octeon_ep: not in enabled drivers build config 00:03:20.610 net/pcap: not in enabled drivers build config 00:03:20.610 net/pfe: not in enabled drivers build config 00:03:20.610 net/qede: not in enabled drivers build config 00:03:20.610 net/ring: not in enabled drivers build config 00:03:20.610 net/sfc: not in enabled drivers build config 00:03:20.610 net/softnic: not in enabled drivers build config 00:03:20.610 net/tap: not in enabled drivers build config 00:03:20.610 net/thunderx: not in enabled drivers build config 00:03:20.610 net/txgbe: not in enabled drivers build config 00:03:20.610 net/vdev_netvsc: not in enabled drivers build config 00:03:20.610 net/vhost: not in enabled drivers build config 00:03:20.610 net/virtio: not in enabled drivers build config 00:03:20.610 net/vmxnet3: not in enabled drivers build config 00:03:20.610 raw/*: missing internal dependency, "rawdev" 00:03:20.610 crypto/armv8: not in enabled drivers build config 00:03:20.610 crypto/bcmfs: not in enabled drivers build config 00:03:20.610 crypto/caam_jr: not in enabled drivers build config 00:03:20.610 crypto/ccp: not in enabled drivers build config 00:03:20.610 crypto/cnxk: not in enabled drivers build config 00:03:20.610 crypto/dpaa_sec: not in enabled drivers build config 00:03:20.610 crypto/dpaa2_sec: not in enabled drivers build config 00:03:20.610 crypto/ipsec_mb: not in enabled drivers build config 00:03:20.610 crypto/mlx5: not in enabled drivers build config 00:03:20.610 crypto/mvsam: not in enabled drivers build config 00:03:20.610 crypto/nitrox: 
not in enabled drivers build config 00:03:20.610 crypto/null: not in enabled drivers build config 00:03:20.610 crypto/octeontx: not in enabled drivers build config 00:03:20.610 crypto/openssl: not in enabled drivers build config 00:03:20.610 crypto/scheduler: not in enabled drivers build config 00:03:20.610 crypto/uadk: not in enabled drivers build config 00:03:20.610 crypto/virtio: not in enabled drivers build config 00:03:20.610 compress/isal: not in enabled drivers build config 00:03:20.610 compress/mlx5: not in enabled drivers build config 00:03:20.610 compress/nitrox: not in enabled drivers build config 00:03:20.610 compress/octeontx: not in enabled drivers build config 00:03:20.610 compress/zlib: not in enabled drivers build config 00:03:20.610 regex/*: missing internal dependency, "regexdev" 00:03:20.610 ml/*: missing internal dependency, "mldev" 00:03:20.610 vdpa/ifc: not in enabled drivers build config 00:03:20.610 vdpa/mlx5: not in enabled drivers build config 00:03:20.610 vdpa/nfp: not in enabled drivers build config 00:03:20.610 vdpa/sfc: not in enabled drivers build config 00:03:20.610 event/*: missing internal dependency, "eventdev" 00:03:20.610 baseband/*: missing internal dependency, "bbdev" 00:03:20.610 gpu/*: missing internal dependency, "gpudev" 00:03:20.610 00:03:20.610 00:03:20.610 Build targets in project: 84 00:03:20.610 00:03:20.610 DPDK 24.03.0 00:03:20.610 00:03:20.610 User defined options 00:03:20.610 buildtype : debug 00:03:20.611 default_library : shared 00:03:20.611 libdir : lib 00:03:20.611 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:20.611 b_sanitize : address 00:03:20.611 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:20.611 c_link_args : 00:03:20.611 cpu_instruction_set: native 00:03:20.611 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:20.611 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:20.611 enable_docs : false 00:03:20.611 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:20.611 enable_kmods : false 00:03:20.611 max_lcores : 128 00:03:20.611 tests : false 00:03:20.611 00:03:20.611 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:21.174 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:21.174 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:21.174 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:21.174 [3/267] Linking static target lib/librte_kvargs.a 00:03:21.174 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:21.174 [5/267] Linking static target lib/librte_log.a 00:03:21.174 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:21.737 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:21.737 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:21.737 [9/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:21.737 [10/267] 
00:03:21.737 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:21.737 [12/267] Linking static target lib/librte_telemetry.a
00:03:21.737 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:21.737 [14/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.737 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:21.737 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:21.737 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:21.994 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:21.994 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:22.250 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:22.250 [21/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:22.250 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:22.250 [23/267] Linking target lib/librte_log.so.24.1
00:03:22.250 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:22.250 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:22.250 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:22.250 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:22.507 [28/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:22.507 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:22.507 [30/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:22.507 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:22.507 [32/267] Linking target lib/librte_kvargs.so.24.1
00:03:22.507 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:22.507 [34/267] Linking target lib/librte_telemetry.so.24.1
00:03:22.507 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:22.764 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:22.764 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:22.764 [38/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:22.764 [39/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:22.764 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:22.764 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:22.764 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:22.764 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:22.764 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:23.021 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:23.021 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:23.021 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:23.021 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:23.021 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:23.279 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:23.279 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:23.279 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:23.279 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:23.279 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:23.279 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:23.537 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:23.537 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:23.537 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:23.537 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:23.537 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:23.537 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:23.537 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:23.795 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:23.795 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:23.795 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:23.795 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:23.795 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:24.053 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:24.053 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:24.053 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:24.053 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:24.053 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:24.053 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:24.053 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:24.310 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:24.310 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:24.310 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:24.310 [78/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:24.310 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:24.569 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:24.569 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:24.569 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:24.569 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:24.569 [84/267] Linking static target lib/librte_ring.a
00:03:24.826 [85/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:24.826 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:24.826 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:24.826 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:24.826 [89/267] Linking static target lib/librte_eal.a
00:03:25.083 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:25.083 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:25.083 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:25.083 [93/267] Linking static target lib/librte_mempool.a
00:03:25.083 [94/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:25.083 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:25.083 [96/267] Linking static target lib/librte_rcu.a
00:03:25.339 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:25.339 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:25.339 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:25.598 [100/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:25.598 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:25.598 [102/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:25.598 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:25.598 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:03:25.598 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:25.598 [106/267] Linking static target lib/librte_net.a
00:03:25.868 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:25.868 [108/267] Linking static target lib/librte_meter.a
00:03:25.868 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:25.868 [110/267] Linking static target lib/librte_mbuf.a
00:03:25.868 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:26.126 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:26.126 [113/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.126 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:26.126 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:26.126 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.126 [117/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.383 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:26.383 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:26.642 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:26.900 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.900 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:26.900 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:26.900 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:26.900 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:26.900 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:26.900 [127/267] Linking static target lib/librte_pci.a
00:03:26.900 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:26.900 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:26.900 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:27.158 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:27.158 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:27.158 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:27.158 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:27.158 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:27.158 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:27.416 [137/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:27.416 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:27.416 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:27.416 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:27.416 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:27.416 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:27.416 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:27.416 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:27.416 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:27.416 [146/267] Linking static target lib/librte_cmdline.a
00:03:27.674 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:27.674 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:27.674 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:27.932 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:27.932 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:27.932 [152/267] Linking static target lib/librte_timer.a
00:03:27.932 [153/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:27.932 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:27.932 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:28.189 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:28.190 [157/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:03:28.190 [158/267] Linking static target lib/librte_hash.a
00:03:28.190 [159/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:28.448 [160/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:28.448 [161/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.448 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:28.448 [163/267] Linking static target lib/librte_dmadev.a
00:03:28.448 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:03:28.448 [165/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:28.448 [166/267] Linking static target lib/librte_compressdev.a
00:03:28.448 [167/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:28.448 [168/267] Linking static target lib/librte_ethdev.a
00:03:28.707 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:28.707 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:03:28.707 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:28.707 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:03:28.965 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.965 [174/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:03:28.965 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:28.965 [176/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:03:29.224 [177/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.224 [178/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.224 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:29.224 [180/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.224 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:03:29.224 [182/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:03:29.481 [183/267] Linking static target lib/librte_cryptodev.a
00:03:29.481 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:29.481 [185/267] Linking static target lib/librte_power.a
00:03:29.739 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:03:29.739 [187/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:03:29.739 [188/267] Linking static target lib/librte_reorder.a
00:03:29.739 [189/267] Linking static target lib/librte_security.a
00:03:29.739 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:03:29.739 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:03:29.996 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:03:30.254 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:03:30.254 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:03:30.254 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:03:30.512 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:03:30.770 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:03:30.770 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:30.770 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:03:30.770 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:31.029 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:03:31.029 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:31.029 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:31.029 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:31.288 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:31.288 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:31.288 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:31.288 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:31.288 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:31.546 [210/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:31.546 [211/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:31.546 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:31.546 [213/267] Linking static target drivers/librte_bus_vdev.a
00:03:31.546 [214/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:31.546 [215/267] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:31.546 [216/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:31.546 [217/267] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:31.546 [218/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:31.546 [219/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:31.546 [220/267] Linking static target drivers/librte_bus_pci.a
00:03:31.806 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:31.806 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:31.806 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:31.806 [224/267] Linking static target drivers/librte_mempool_ring.a
00:03:31.806 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:32.099 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:32.668 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:33.601 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.601 [229/267] Linking target lib/librte_eal.so.24.1
00:03:33.601 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:03:33.601 [231/267] Linking target lib/librte_ring.so.24.1
00:03:33.601 [232/267] Linking target lib/librte_meter.so.24.1
00:03:33.601 [233/267] Linking target lib/librte_pci.so.24.1
00:03:33.601 [234/267] Linking target lib/librte_dmadev.so.24.1
00:03:33.601 [235/267] Linking target lib/librte_timer.so.24.1
00:03:33.601 [236/267] Linking target drivers/librte_bus_vdev.so.24.1
00:03:33.859 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:03:33.859 [238/267] Linking target lib/librte_rcu.so.24.1
00:03:33.859 [239/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:03:33.859 [240/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:03:33.859 [241/267] Linking target lib/librte_mempool.so.24.1
00:03:33.859 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:03:33.859 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:03:33.859 [244/267] Linking target drivers/librte_bus_pci.so.24.1
00:03:33.859 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:03:33.859 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:03:33.859 [247/267] Linking target drivers/librte_mempool_ring.so.24.1
00:03:33.859 [248/267] Linking target lib/librte_mbuf.so.24.1
00:03:34.118 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:03:34.118 [250/267] Linking target lib/librte_reorder.so.24.1
00:03:34.118 [251/267] Linking target lib/librte_net.so.24.1
00:03:34.118 [252/267] Linking target lib/librte_cryptodev.so.24.1
00:03:34.118 [253/267] Linking target lib/librte_compressdev.so.24.1
00:03:34.118 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:03:34.118 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:03:34.375 [256/267] Linking target lib/librte_hash.so.24.1
00:03:34.375 [257/267] Linking target lib/librte_security.so.24.1
00:03:34.375 [258/267] Linking target lib/librte_cmdline.so.24.1
00:03:34.375 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:03:34.940 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:34.940 [261/267] Linking target lib/librte_ethdev.so.24.1
00:03:34.940 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:03:35.198 [263/267] Linking target lib/librte_power.so.24.1
00:03:36.129 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:36.129 [265/267] Linking static target lib/librte_vhost.a
00:03:37.503 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:37.503 [267/267] Linking target lib/librte_vhost.so.24.1
00:03:37.503 INFO: autodetecting backend as ninja
00:03:37.503 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:03:59.413 CC lib/ut/ut.o
00:03:59.413 CC lib/log/log.o
00:03:59.413 CC lib/log/log_deprecated.o
00:03:59.413 CC lib/log/log_flags.o
00:03:59.413 CC lib/ut_mock/mock.o
00:03:59.413 LIB libspdk_ut.a
00:03:59.413 SO libspdk_ut.so.2.0
00:03:59.413 LIB libspdk_ut_mock.a
00:03:59.413 LIB libspdk_log.a
00:03:59.413 SO libspdk_ut_mock.so.6.0
00:03:59.413 SYMLINK libspdk_ut.so
00:03:59.413 SO libspdk_log.so.7.1
00:03:59.413 SYMLINK libspdk_ut_mock.so
00:03:59.413 SYMLINK libspdk_log.so
00:03:59.413 CC lib/util/base64.o
00:03:59.413 CC lib/ioat/ioat.o
00:03:59.413 CC lib/util/bit_array.o
00:03:59.413 CC lib/dma/dma.o
00:03:59.413 CC lib/util/crc32.o
00:03:59.413 CC lib/util/crc16.o
00:03:59.413 CC lib/util/cpuset.o
00:03:59.413 CC lib/util/crc32c.o
00:03:59.413 CXX lib/trace_parser/trace.o
00:03:59.413 CC lib/vfio_user/host/vfio_user_pci.o
00:03:59.413 CC lib/util/crc32_ieee.o
00:03:59.413 CC lib/util/crc64.o
00:03:59.413 CC lib/util/dif.o
00:03:59.413 LIB libspdk_dma.a
00:03:59.413 CC lib/util/fd.o
00:03:59.413 SO libspdk_dma.so.5.0
00:03:59.413 CC lib/util/fd_group.o
00:03:59.413 CC lib/util/file.o
00:03:59.413 CC lib/util/hexlify.o
00:03:59.413 SYMLINK libspdk_dma.so
00:03:59.413 CC lib/vfio_user/host/vfio_user.o
00:03:59.413 CC lib/util/iov.o
00:03:59.413 LIB libspdk_ioat.a
00:03:59.413 CC lib/util/math.o
00:03:59.413 SO libspdk_ioat.so.7.0
00:03:59.413 CC lib/util/net.o
00:03:59.413 CC lib/util/pipe.o
00:03:59.413 CC lib/util/strerror_tls.o
00:03:59.413 SYMLINK libspdk_ioat.so
00:03:59.413 CC lib/util/string.o
00:03:59.413 CC lib/util/uuid.o
00:03:59.413 LIB libspdk_vfio_user.a
00:03:59.413 CC lib/util/xor.o
00:03:59.413 SO libspdk_vfio_user.so.5.0
00:03:59.413 CC lib/util/zipf.o
00:03:59.413 CC lib/util/md5.o
00:03:59.413 SYMLINK libspdk_vfio_user.so
00:03:59.413 LIB libspdk_util.a
00:03:59.413 SO libspdk_util.so.10.1
00:03:59.413 LIB libspdk_trace_parser.a
00:03:59.413 SO libspdk_trace_parser.so.6.0
00:03:59.413 SYMLINK libspdk_util.so
00:03:59.413 SYMLINK libspdk_trace_parser.so
00:03:59.413 CC lib/conf/conf.o
00:03:59.413 CC lib/idxd/idxd.o
00:03:59.413 CC lib/idxd/idxd_user.o
00:03:59.413 CC lib/idxd/idxd_kernel.o
00:03:59.413 CC lib/json/json_parse.o
00:03:59.413 CC lib/vmd/vmd.o
00:03:59.413 CC lib/rdma_utils/rdma_utils.o
00:03:59.413 CC lib/json/json_util.o
00:03:59.413 CC lib/vmd/led.o
00:03:59.413 CC lib/env_dpdk/env.o
00:03:59.413 CC lib/env_dpdk/memory.o
00:03:59.413 CC lib/json/json_write.o
00:03:59.413 CC lib/env_dpdk/pci.o
00:03:59.413 LIB libspdk_conf.a
00:03:59.413 CC lib/env_dpdk/init.o
00:03:59.413 SO libspdk_conf.so.6.0
00:03:59.413 SYMLINK libspdk_conf.so
00:03:59.413 LIB libspdk_rdma_utils.a
00:03:59.413 CC lib/env_dpdk/threads.o
00:03:59.413 CC lib/env_dpdk/pci_ioat.o
00:03:59.413 SO libspdk_rdma_utils.so.1.0
00:03:59.413 SYMLINK libspdk_rdma_utils.so
00:03:59.413 CC lib/env_dpdk/pci_virtio.o
00:03:59.413 LIB libspdk_json.a
00:03:59.413 CC lib/env_dpdk/pci_vmd.o
00:03:59.413 SO libspdk_json.so.6.0
00:03:59.413 CC lib/env_dpdk/pci_idxd.o
00:03:59.413 SYMLINK libspdk_json.so
00:03:59.413 CC lib/env_dpdk/pci_event.o
00:03:59.413 CC lib/env_dpdk/sigbus_handler.o
00:03:59.413 CC lib/env_dpdk/pci_dpdk.o
00:03:59.413 CC lib/env_dpdk/pci_dpdk_2207.o
00:03:59.413 CC lib/rdma_provider/common.o
00:03:59.413 CC lib/rdma_provider/rdma_provider_verbs.o
00:03:59.672 CC lib/env_dpdk/pci_dpdk_2211.o
00:03:59.672 LIB libspdk_vmd.a
00:03:59.672 LIB libspdk_idxd.a
00:03:59.672 SO libspdk_vmd.so.6.0
00:03:59.672 SO libspdk_idxd.so.12.1
00:03:59.672 SYMLINK libspdk_vmd.so
00:03:59.672 LIB libspdk_rdma_provider.a
00:03:59.672 SYMLINK libspdk_idxd.so
00:03:59.672 SO libspdk_rdma_provider.so.7.0
00:03:59.672 CC lib/jsonrpc/jsonrpc_server.o
00:03:59.672 CC lib/jsonrpc/jsonrpc_client.o
00:03:59.672 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:59.672 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:59.672 SYMLINK libspdk_rdma_provider.so
00:03:59.930 LIB libspdk_jsonrpc.a
00:04:00.188 SO libspdk_jsonrpc.so.6.0
00:04:00.188 SYMLINK libspdk_jsonrpc.so
00:04:00.188 LIB libspdk_env_dpdk.a
00:04:00.447 SO libspdk_env_dpdk.so.15.1
00:04:00.447 CC lib/rpc/rpc.o
00:04:00.447 SYMLINK libspdk_env_dpdk.so
00:04:00.705 LIB libspdk_rpc.a
00:04:00.705 SO libspdk_rpc.so.6.0
00:04:00.705 SYMLINK libspdk_rpc.so
00:04:00.965 CC lib/keyring/keyring_rpc.o
00:04:00.965 CC lib/keyring/keyring.o
00:04:00.965 CC lib/trace/trace.o
00:04:00.965 CC lib/trace/trace_flags.o
00:04:00.965 CC lib/trace/trace_rpc.o
00:04:00.965 CC lib/notify/notify.o
00:04:00.965 CC lib/notify/notify_rpc.o
00:04:00.965 LIB libspdk_notify.a
00:04:00.965 SO libspdk_notify.so.6.0
00:04:00.965 LIB libspdk_trace.a
00:04:00.965 LIB libspdk_keyring.a
00:04:00.965 SYMLINK libspdk_notify.so
00:04:01.224 SO libspdk_trace.so.11.0
00:04:01.224 SO libspdk_keyring.so.2.0
00:04:01.224 SYMLINK libspdk_keyring.so
00:04:01.224 SYMLINK libspdk_trace.so
00:04:01.482 CC lib/thread/thread.o
00:04:01.482 CC lib/thread/iobuf.o
00:04:01.482 CC lib/sock/sock.o
00:04:01.482 CC lib/sock/sock_rpc.o
00:04:01.740 LIB libspdk_sock.a
00:04:01.740 SO libspdk_sock.so.10.0
00:04:01.998 SYMLINK libspdk_sock.so
00:04:02.256 CC lib/nvme/nvme_ctrlr_cmd.o
00:04:02.256 CC lib/nvme/nvme_ctrlr.o
00:04:02.256 CC lib/nvme/nvme_ns.o
00:04:02.256 CC lib/nvme/nvme_fabric.o
00:04:02.256 CC lib/nvme/nvme_ns_cmd.o
00:04:02.256 CC lib/nvme/nvme_qpair.o
00:04:02.256 CC lib/nvme/nvme_pcie.o
00:04:02.256 CC lib/nvme/nvme_pcie_common.o
00:04:02.256 CC lib/nvme/nvme.o
00:04:02.821 CC lib/nvme/nvme_quirks.o
00:04:02.821 CC lib/nvme/nvme_transport.o
00:04:02.821 LIB libspdk_thread.a
00:04:02.821 CC lib/nvme/nvme_discovery.o
00:04:02.821 SO libspdk_thread.so.11.0
00:04:02.821 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:04:02.821 SYMLINK libspdk_thread.so
00:04:02.821 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:04:02.821 CC lib/nvme/nvme_tcp.o
00:04:02.821 CC lib/nvme/nvme_opal.o
00:04:03.080 CC lib/nvme/nvme_io_msg.o
00:04:03.080 CC lib/nvme/nvme_poll_group.o
00:04:03.080 CC lib/nvme/nvme_zns.o
00:04:03.080 CC lib/nvme/nvme_stubs.o
00:04:03.338 CC lib/nvme/nvme_auth.o
00:04:03.338 CC lib/nvme/nvme_cuse.o
00:04:03.338 CC lib/nvme/nvme_rdma.o
00:04:03.596 CC lib/accel/accel.o
00:04:03.855 CC lib/init/json_config.o
00:04:03.855 CC lib/blob/blobstore.o
00:04:03.855 CC lib/virtio/virtio.o
00:04:03.855 CC lib/fsdev/fsdev.o
00:04:03.855 CC lib/init/subsystem.o
00:04:03.855 CC lib/fsdev/fsdev_io.o
00:04:04.114 CC lib/fsdev/fsdev_rpc.o
00:04:04.114 CC lib/init/subsystem_rpc.o
00:04:04.114 CC lib/init/rpc.o
00:04:04.114 CC lib/virtio/virtio_vhost_user.o
00:04:04.114 CC lib/blob/request.o
00:04:04.114 CC lib/blob/zeroes.o
00:04:04.114 LIB libspdk_init.a
00:04:04.456 SO libspdk_init.so.6.0
00:04:04.456 SYMLINK libspdk_init.so
00:04:04.456 CC lib/blob/blob_bs_dev.o
00:04:04.456 CC lib/virtio/virtio_vfio_user.o
00:04:04.456 CC lib/accel/accel_rpc.o
00:04:04.456 CC lib/virtio/virtio_pci.o
00:04:04.456 CC lib/accel/accel_sw.o
00:04:04.713 LIB libspdk_nvme.a
00:04:04.713 LIB libspdk_fsdev.a
00:04:04.713 CC lib/event/reactor.o
00:04:04.713 CC lib/event/log_rpc.o
00:04:04.713 CC lib/event/app.o
00:04:04.713 CC lib/event/app_rpc.o
00:04:04.713 SO libspdk_fsdev.so.2.0
00:04:04.713 SYMLINK libspdk_fsdev.so
00:04:04.713 CC lib/event/scheduler_static.o
00:04:04.713 SO libspdk_nvme.so.15.0
00:04:04.713 LIB libspdk_virtio.a
00:04:04.713 LIB libspdk_accel.a
00:04:04.713 SO libspdk_virtio.so.7.0
00:04:04.713 SO libspdk_accel.so.16.0
00:04:04.971 SYMLINK libspdk_virtio.so
00:04:04.971 SYMLINK libspdk_accel.so
00:04:04.971 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:04:04.971 SYMLINK libspdk_nvme.so
00:04:05.229 CC lib/bdev/bdev.o
00:04:05.229 CC lib/bdev/part.o
00:04:05.229 CC lib/bdev/bdev_rpc.o
00:04:05.229 CC lib/bdev/bdev_zone.o
00:04:05.229 CC lib/bdev/scsi_nvme.o
00:04:05.229 LIB libspdk_event.a
00:04:05.229 SO libspdk_event.so.14.0
00:04:05.229 SYMLINK libspdk_event.so
00:04:05.488 LIB libspdk_fuse_dispatcher.a
00:04:05.488 SO libspdk_fuse_dispatcher.so.1.0
00:04:05.488 SYMLINK libspdk_fuse_dispatcher.so
00:04:06.862 LIB libspdk_blob.a
00:04:06.862 SO libspdk_blob.so.12.0
00:04:06.862 SYMLINK libspdk_blob.so
00:04:07.121 CC lib/lvol/lvol.o
00:04:07.121 CC lib/blobfs/tree.o
00:04:07.121 CC lib/blobfs/blobfs.o
00:04:08.053 LIB libspdk_blobfs.a
00:04:08.054 SO libspdk_blobfs.so.11.0
00:04:08.054 LIB libspdk_lvol.a
00:04:08.054 SYMLINK libspdk_blobfs.so
00:04:08.054 SO libspdk_lvol.so.11.0
00:04:08.054 SYMLINK libspdk_lvol.so
00:04:08.054 LIB libspdk_bdev.a
00:04:08.054 SO libspdk_bdev.so.17.0
00:04:08.311 SYMLINK libspdk_bdev.so
00:04:08.311 CC lib/nvmf/ctrlr_bdev.o
00:04:08.311 CC lib/nvmf/ctrlr.o
00:04:08.311 CC lib/nvmf/ctrlr_discovery.o
00:04:08.311 CC lib/nvmf/subsystem.o
00:04:08.311 CC lib/nvmf/nvmf.o
00:04:08.311 CC lib/nvmf/nvmf_rpc.o
00:04:08.311 CC lib/ftl/ftl_core.o
00:04:08.311 CC lib/nbd/nbd.o
00:04:08.311 CC lib/ublk/ublk.o
00:04:08.311 CC lib/scsi/dev.o
00:04:08.568 CC lib/scsi/lun.o
00:04:08.825 CC lib/nbd/nbd_rpc.o
00:04:08.825 CC lib/ftl/ftl_init.o
00:04:08.825 CC lib/ublk/ublk_rpc.o
00:04:08.825 CC lib/scsi/port.o
00:04:09.083 LIB libspdk_nbd.a
00:04:09.083 SO libspdk_nbd.so.7.0
00:04:09.083 CC lib/ftl/ftl_layout.o
00:04:09.083 CC lib/ftl/ftl_debug.o
00:04:09.083 SYMLINK libspdk_nbd.so
00:04:09.083 CC lib/ftl/ftl_io.o
00:04:09.083 CC lib/ftl/ftl_sb.o
00:04:09.083 CC lib/scsi/scsi.o
00:04:09.083 LIB libspdk_ublk.a
00:04:09.083 SO libspdk_ublk.so.3.0
00:04:09.340 CC lib/scsi/scsi_bdev.o
00:04:09.340 SYMLINK libspdk_ublk.so
00:04:09.340 CC lib/ftl/ftl_l2p.o
00:04:09.340 CC lib/ftl/ftl_l2p_flat.o
00:04:09.340 CC lib/ftl/ftl_nv_cache.o
00:04:09.340 CC lib/nvmf/transport.o
00:04:09.340 CC lib/ftl/ftl_band.o
00:04:09.340 CC lib/ftl/ftl_band_ops.o
00:04:09.340 CC lib/nvmf/tcp.o
00:04:09.340 CC lib/ftl/ftl_writer.o
00:04:09.597 CC lib/ftl/ftl_rq.o
00:04:09.597 CC lib/scsi/scsi_pr.o
00:04:09.597 CC lib/ftl/ftl_reloc.o
00:04:09.597 CC lib/scsi/scsi_rpc.o
00:04:09.597 CC lib/scsi/task.o
00:04:09.855 CC lib/ftl/ftl_l2p_cache.o
00:04:09.855 CC lib/nvmf/stubs.o
00:04:09.855 CC lib/nvmf/mdns_server.o
00:04:09.855 CC lib/nvmf/rdma.o
00:04:09.855 LIB libspdk_scsi.a
00:04:10.112 SO libspdk_scsi.so.9.0
00:04:10.112 CC lib/ftl/ftl_p2l.o
00:04:10.112 CC lib/nvmf/auth.o
00:04:10.112 SYMLINK libspdk_scsi.so
00:04:10.112 CC lib/ftl/ftl_p2l_log.o
00:04:10.112 CC lib/ftl/mngt/ftl_mngt.o
00:04:10.112 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:04:10.112 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:04:10.370 CC lib/ftl/mngt/ftl_mngt_startup.o
00:04:10.370 CC lib/ftl/mngt/ftl_mngt_md.o
00:04:10.370 CC lib/ftl/mngt/ftl_mngt_misc.o
00:04:10.370 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:04:10.370 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:04:10.370 CC lib/ftl/mngt/ftl_mngt_band.o
00:04:10.370 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:04:10.629 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:04:10.629 CC lib/iscsi/conn.o
00:04:10.629 CC lib/iscsi/init_grp.o
00:04:10.629 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:04:10.629 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:04:10.886 CC lib/ftl/utils/ftl_conf.o
00:04:10.886 CC lib/ftl/utils/ftl_md.o
00:04:10.886 CC lib/vhost/vhost.o
00:04:10.886 CC lib/iscsi/iscsi.o
00:04:10.886 CC lib/iscsi/param.o
00:04:10.886 CC lib/iscsi/portal_grp.o
00:04:10.886 CC lib/iscsi/tgt_node.o
00:04:10.886 CC lib/iscsi/iscsi_subsystem.o
00:04:11.144 CC lib/iscsi/iscsi_rpc.o
00:04:11.144 CC lib/vhost/vhost_rpc.o
00:04:11.144 CC lib/iscsi/task.o
00:04:11.144 CC lib/ftl/utils/ftl_mempool.o
00:04:11.144 CC lib/ftl/utils/ftl_bitmap.o
00:04:11.419 CC lib/ftl/utils/ftl_property.o
00:04:11.419 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:04:11.419 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:04:11.419 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:04:11.419 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:04:11.419 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:04:11.419 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:04:11.678 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:04:11.678 CC lib/ftl/upgrade/ftl_sb_v3.o
00:04:11.678 CC lib/vhost/vhost_scsi.o
00:04:11.678 CC lib/vhost/vhost_blk.o
00:04:11.678 CC lib/ftl/upgrade/ftl_sb_v5.o
00:04:11.678 CC lib/vhost/rte_vhost_user.o
00:04:11.678 CC lib/ftl/nvc/ftl_nvc_dev.o
00:04:11.678 LIB libspdk_nvmf.a
00:04:11.678 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:04:11.678 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:04:11.678 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:04:11.935 SO libspdk_nvmf.so.20.0
00:04:11.935 CC lib/ftl/base/ftl_base_dev.o
00:04:11.935 CC lib/ftl/base/ftl_base_bdev.o
00:04:11.935 CC lib/ftl/ftl_trace.o
00:04:12.193 SYMLINK libspdk_nvmf.so
00:04:12.193 LIB libspdk_ftl.a
00:04:12.193 LIB libspdk_iscsi.a
00:04:12.193 SO libspdk_iscsi.so.8.0
00:04:12.451 SO libspdk_ftl.so.9.0
00:04:12.451 SYMLINK libspdk_iscsi.so
00:04:12.709 LIB libspdk_vhost.a
00:04:12.709 SYMLINK libspdk_ftl.so
00:04:12.709 SO libspdk_vhost.so.8.0
00:04:12.709 SYMLINK libspdk_vhost.so
00:04:12.968 CC module/env_dpdk/env_dpdk_rpc.o
00:04:12.968 CC module/fsdev/aio/fsdev_aio.o
00:04:12.968 CC module/accel/error/accel_error.o
00:04:12.968 CC module/keyring/file/keyring.o
00:04:12.968 CC module/sock/posix/posix.o
00:04:12.968 CC module/accel/ioat/accel_ioat.o
00:04:12.968 CC module/blob/bdev/blob_bdev.o
00:04:12.968 CC module/scheduler/dynamic/scheduler_dynamic.o
00:04:12.968 CC module/accel/iaa/accel_iaa.o
00:04:12.968 CC module/accel/dsa/accel_dsa.o
00:04:13.306 LIB libspdk_env_dpdk_rpc.a
00:04:13.306 SO libspdk_env_dpdk_rpc.so.6.0
00:04:13.306 SYMLINK libspdk_env_dpdk_rpc.so
00:04:13.306 CC module/accel/iaa/accel_iaa_rpc.o
00:04:13.306 CC module/keyring/file/keyring_rpc.o
00:04:13.306 CC module/accel/ioat/accel_ioat_rpc.o
00:04:13.306 CC module/accel/error/accel_error_rpc.o
00:04:13.306 CC module/fsdev/aio/fsdev_aio_rpc.o
00:04:13.306 LIB libspdk_scheduler_dynamic.a
00:04:13.306 SO libspdk_scheduler_dynamic.so.4.0
00:04:13.306 LIB libspdk_accel_iaa.a
00:04:13.306 SO libspdk_accel_iaa.so.3.0
00:04:13.306 LIB libspdk_accel_ioat.a
00:04:13.306 SYMLINK libspdk_scheduler_dynamic.so
00:04:13.306 LIB libspdk_keyring_file.a
00:04:13.306 SO libspdk_accel_ioat.so.6.0
00:04:13.306 LIB libspdk_blob_bdev.a
00:04:13.306 SO libspdk_keyring_file.so.2.0
00:04:13.306 LIB libspdk_accel_error.a
00:04:13.306 SYMLINK libspdk_accel_iaa.so
00:04:13.306 SO libspdk_blob_bdev.so.12.0
00:04:13.306 SO libspdk_accel_error.so.2.0
00:04:13.585 SYMLINK libspdk_keyring_file.so
00:04:13.585 SYMLINK libspdk_accel_ioat.so
00:04:13.585 SYMLINK libspdk_blob_bdev.so
00:04:13.585 SYMLINK libspdk_accel_error.so
00:04:13.585 CC module/accel/dsa/accel_dsa_rpc.o
00:04:13.585 CC module/fsdev/aio/linux_aio_mgr.o
00:04:13.585 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:04:13.585 LIB libspdk_accel_dsa.a
00:04:13.585 CC module/scheduler/gscheduler/gscheduler.o
00:04:13.585 SO libspdk_accel_dsa.so.5.0
00:04:13.585 CC module/keyring/linux/keyring.o
00:04:13.585 SYMLINK libspdk_accel_dsa.so
00:04:13.585 CC module/keyring/linux/keyring_rpc.o
00:04:13.585 LIB libspdk_scheduler_dpdk_governor.a
00:04:13.585 SO libspdk_scheduler_dpdk_governor.so.4.0
00:04:13.585 CC module/bdev/error/vbdev_error.o
00:04:13.585 CC module/bdev/delay/vbdev_delay.o
00:04:13.585 CC module/blobfs/bdev/blobfs_bdev.o
00:04:13.585 LIB libspdk_scheduler_gscheduler.a
00:04:13.585 SYMLINK libspdk_scheduler_dpdk_governor.so
00:04:13.843 CC module/bdev/delay/vbdev_delay_rpc.o
00:04:13.843 SO libspdk_scheduler_gscheduler.so.4.0
00:04:13.843 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:04:13.843 LIB libspdk_keyring_linux.a
00:04:13.843 CC module/bdev/gpt/gpt.o
00:04:13.843 SYMLINK libspdk_scheduler_gscheduler.so
00:04:13.843 CC module/bdev/gpt/vbdev_gpt.o
00:04:13.843 SO libspdk_keyring_linux.so.1.0
00:04:13.843 LIB libspdk_fsdev_aio.a
00:04:13.843 LIB libspdk_sock_posix.a
00:04:13.843 SO libspdk_fsdev_aio.so.1.0
00:04:13.843 SYMLINK libspdk_keyring_linux.so
00:04:13.843 CC module/bdev/error/vbdev_error_rpc.o
00:04:13.843 SO libspdk_sock_posix.so.6.0
00:04:13.843 SYMLINK libspdk_fsdev_aio.so
00:04:13.844 LIB libspdk_blobfs_bdev.a
00:04:13.844 SO libspdk_blobfs_bdev.so.6.0
00:04:13.844 SYMLINK libspdk_sock_posix.so
00:04:14.102 SYMLINK libspdk_blobfs_bdev.so
00:04:14.102 LIB libspdk_bdev_error.a
00:04:14.102 CC module/bdev/lvol/vbdev_lvol.o
00:04:14.102 CC module/bdev/malloc/bdev_malloc.o
00:04:14.102 SO libspdk_bdev_error.so.6.0
00:04:14.102 CC module/bdev/null/bdev_null.o
00:04:14.102 CC module/bdev/nvme/bdev_nvme.o
00:04:14.102 LIB libspdk_bdev_gpt.a
00:04:14.102 LIB libspdk_bdev_delay.a
00:04:14.102 SYMLINK libspdk_bdev_error.so
00:04:14.102 CC module/bdev/passthru/vbdev_passthru.o
00:04:14.102 CC module/bdev/raid/bdev_raid.o
00:04:14.102 SO libspdk_bdev_delay.so.6.0
00:04:14.102 SO libspdk_bdev_gpt.so.6.0
00:04:14.102 CC module/bdev/split/vbdev_split.o
00:04:14.102 SYMLINK libspdk_bdev_gpt.so
00:04:14.102 SYMLINK libspdk_bdev_delay.so
00:04:14.102 CC module/bdev/null/bdev_null_rpc.o
00:04:14.360 CC module/bdev/zone_block/vbdev_zone_block.o
00:04:14.360 CC module/bdev/xnvme/bdev_xnvme.o
00:04:14.360 CC module/bdev/malloc/bdev_malloc_rpc.o
00:04:14.360 LIB libspdk_bdev_null.a
00:04:14.360 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:04:14.360 SO libspdk_bdev_null.so.6.0
00:04:14.360 CC module/bdev/split/vbdev_split_rpc.o
00:04:14.360 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:04:14.360 SYMLINK libspdk_bdev_null.so
00:04:14.360 LIB libspdk_bdev_malloc.a
00:04:14.360 LIB libspdk_bdev_passthru.a
00:04:14.360 SO libspdk_bdev_malloc.so.6.0
00:04:14.617 CC module/bdev/aio/bdev_aio.o
00:04:14.617 SO libspdk_bdev_passthru.so.6.0
00:04:14.617 LIB libspdk_bdev_split.a
00:04:14.617 SYMLINK libspdk_bdev_passthru.so
00:04:14.617 SYMLINK libspdk_bdev_malloc.so
00:04:14.617 CC module/bdev/nvme/bdev_nvme_rpc.o
00:04:14.617 CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:04:14.617 CC module/bdev/ftl/bdev_ftl.o
00:04:14.617 SO libspdk_bdev_split.so.6.0
00:04:14.617 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:04:14.617 SYMLINK libspdk_bdev_split.so
00:04:14.617 CC module/bdev/iscsi/bdev_iscsi.o
00:04:14.617 LIB libspdk_bdev_lvol.a
00:04:14.617 LIB libspdk_bdev_xnvme.a
00:04:14.617 LIB libspdk_bdev_zone_block.a
00:04:14.617 SO libspdk_bdev_lvol.so.6.0
00:04:14.617 SO libspdk_bdev_xnvme.so.3.0
00:04:14.874 SO libspdk_bdev_zone_block.so.6.0
00:04:14.874 CC module/bdev/virtio/bdev_virtio_scsi.o
00:04:14.874 SYMLINK libspdk_bdev_xnvme.so
00:04:14.874 SYMLINK libspdk_bdev_lvol.so
00:04:14.874 CC module/bdev/virtio/bdev_virtio_blk.o
00:04:14.874 CC module/bdev/virtio/bdev_virtio_rpc.o
00:04:14.874 SYMLINK libspdk_bdev_zone_block.so
00:04:14.874 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:04:14.874 CC module/bdev/aio/bdev_aio_rpc.o
00:04:14.874 CC module/bdev/ftl/bdev_ftl_rpc.o
00:04:14.874 CC module/bdev/nvme/nvme_rpc.o
00:04:14.874 LIB libspdk_bdev_aio.a
00:04:14.874 CC module/bdev/nvme/bdev_mdns_client.o
00:04:15.131 SO libspdk_bdev_aio.so.6.0
00:04:15.131 CC module/bdev/raid/bdev_raid_rpc.o
00:04:15.131 LIB libspdk_bdev_iscsi.a
00:04:15.131 LIB libspdk_bdev_ftl.a
00:04:15.131 SYMLINK libspdk_bdev_aio.so
00:04:15.131 CC module/bdev/raid/bdev_raid_sb.o
00:04:15.131 SO libspdk_bdev_iscsi.so.6.0
00:04:15.131 SO libspdk_bdev_ftl.so.6.0
00:04:15.131 CC module/bdev/nvme/vbdev_opal.o
00:04:15.131 CC module/bdev/raid/raid0.o
00:04:15.131 SYMLINK libspdk_bdev_iscsi.so
00:04:15.131 SYMLINK libspdk_bdev_ftl.so
00:04:15.131 CC module/bdev/raid/raid1.o
00:04:15.131 CC module/bdev/raid/concat.o
00:04:15.131 CC module/bdev/nvme/vbdev_opal_rpc.o
00:04:15.131 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:04:15.131 LIB libspdk_bdev_virtio.a
00:04:15.131 SO libspdk_bdev_virtio.so.6.0
00:04:15.388 SYMLINK libspdk_bdev_virtio.so
00:04:15.388 LIB libspdk_bdev_raid.a
00:04:15.388 SO libspdk_bdev_raid.so.6.0
00:04:15.645 SYMLINK libspdk_bdev_raid.so
00:04:16.578 LIB libspdk_bdev_nvme.a
00:04:16.578 SO libspdk_bdev_nvme.so.7.1
00:04:16.578 SYMLINK libspdk_bdev_nvme.so
00:04:17.142 CC module/event/subsystems/fsdev/fsdev.o
00:04:17.142 CC module/event/subsystems/vmd/vmd.o
00:04:17.142 CC module/event/subsystems/vmd/vmd_rpc.o
00:04:17.142 CC module/event/subsystems/keyring/keyring.o
00:04:17.142 CC module/event/subsystems/sock/sock.o
00:04:17.142 CC module/event/subsystems/iobuf/iobuf.o
00:04:17.142 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:04:17.142 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:04:17.142 CC module/event/subsystems/scheduler/scheduler.o
00:04:17.142 LIB libspdk_event_keyring.a
00:04:17.142 LIB libspdk_event_scheduler.a
00:04:17.142 LIB libspdk_event_fsdev.a
00:04:17.142 LIB libspdk_event_sock.a
00:04:17.142 LIB libspdk_event_vhost_blk.a
00:04:17.142 SO libspdk_event_keyring.so.1.0
00:04:17.142 LIB libspdk_event_iobuf.a
00:04:17.142 SO libspdk_event_scheduler.so.4.0
00:04:17.142 SO libspdk_event_sock.so.5.0
00:04:17.142 SO libspdk_event_fsdev.so.1.0
00:04:17.142 SO libspdk_event_vhost_blk.so.3.0
00:04:17.142 SO libspdk_event_iobuf.so.3.0
00:04:17.142 LIB libspdk_event_vmd.a
00:04:17.142 SYMLINK libspdk_event_keyring.so
00:04:17.142 SYMLINK libspdk_event_scheduler.so
00:04:17.142 SYMLINK libspdk_event_fsdev.so
00:04:17.142 SYMLINK libspdk_event_vhost_blk.so
00:04:17.142 SYMLINK libspdk_event_sock.so
00:04:17.400 SO libspdk_event_vmd.so.6.0
00:04:17.400 SYMLINK libspdk_event_iobuf.so
00:04:17.400 SYMLINK libspdk_event_vmd.so
00:04:17.400 CC module/event/subsystems/accel/accel.o
00:04:17.657 LIB libspdk_event_accel.a
00:04:17.657 SO libspdk_event_accel.so.6.0
00:04:17.657 SYMLINK libspdk_event_accel.so
00:04:17.915 CC module/event/subsystems/bdev/bdev.o
00:04:18.261 LIB libspdk_event_bdev.a
00:04:18.261 SO libspdk_event_bdev.so.6.0
00:04:18.261 SYMLINK libspdk_event_bdev.so
00:04:18.261 CC module/event/subsystems/scsi/scsi.o
00:04:18.261 CC module/event/subsystems/ublk/ublk.o
00:04:18.261 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:04:18.261 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:04:18.261 CC module/event/subsystems/nbd/nbd.o
00:04:18.520 LIB libspdk_event_nbd.a
00:04:18.520 LIB libspdk_event_ublk.a
00:04:18.520 LIB libspdk_event_scsi.a
00:04:18.520 SO libspdk_event_nbd.so.6.0
00:04:18.520 SO libspdk_event_ublk.so.3.0
00:04:18.520 SO libspdk_event_scsi.so.6.0
00:04:18.520 SYMLINK libspdk_event_ublk.so
00:04:18.520 SYMLINK libspdk_event_scsi.so
00:04:18.520 SYMLINK libspdk_event_nbd.so
00:04:18.520 LIB libspdk_event_nvmf.a
00:04:18.520 SO libspdk_event_nvmf.so.6.0
00:04:18.777 SYMLINK libspdk_event_nvmf.so
00:04:18.777 CC module/event/subsystems/iscsi/iscsi.o
00:04:18.777 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:04:18.777 LIB libspdk_event_iscsi.a
00:04:18.777 LIB libspdk_event_vhost_scsi.a
00:04:18.777 SO libspdk_event_iscsi.so.6.0
00:04:19.035 SO libspdk_event_vhost_scsi.so.3.0
00:04:19.035 SYMLINK libspdk_event_iscsi.so
00:04:19.035 SYMLINK libspdk_event_vhost_scsi.so
00:04:19.035 SO libspdk.so.6.0
00:04:19.035 SYMLINK libspdk.so
00:04:19.295 CXX app/trace/trace.o
00:04:19.295 TEST_HEADER include/spdk/accel.h
00:04:19.295 TEST_HEADER include/spdk/accel_module.h
00:04:19.295 TEST_HEADER include/spdk/assert.h
00:04:19.295 CC test/rpc_client/rpc_client_test.o
00:04:19.295 TEST_HEADER include/spdk/barrier.h
00:04:19.295 TEST_HEADER include/spdk/base64.h
00:04:19.295 TEST_HEADER include/spdk/bdev.h
00:04:19.295 TEST_HEADER include/spdk/bdev_module.h
00:04:19.295 TEST_HEADER include/spdk/bdev_zone.h
00:04:19.295 TEST_HEADER include/spdk/bit_array.h
00:04:19.295 TEST_HEADER include/spdk/bit_pool.h
00:04:19.295 TEST_HEADER include/spdk/blob_bdev.h
00:04:19.295 TEST_HEADER include/spdk/blobfs_bdev.h
00:04:19.295 TEST_HEADER include/spdk/blobfs.h
00:04:19.295 TEST_HEADER include/spdk/blob.h
00:04:19.295 TEST_HEADER include/spdk/conf.h
00:04:19.295 TEST_HEADER include/spdk/config.h
00:04:19.295 TEST_HEADER include/spdk/cpuset.h
00:04:19.295 TEST_HEADER include/spdk/crc16.h
00:04:19.295 TEST_HEADER include/spdk/crc32.h
00:04:19.295 TEST_HEADER include/spdk/crc64.h
00:04:19.295 CC examples/interrupt_tgt/interrupt_tgt.o
00:04:19.295 TEST_HEADER include/spdk/dif.h
00:04:19.295 TEST_HEADER include/spdk/dma.h
00:04:19.295 TEST_HEADER include/spdk/endian.h
00:04:19.295 TEST_HEADER include/spdk/env_dpdk.h
00:04:19.295 TEST_HEADER include/spdk/env.h
00:04:19.295 TEST_HEADER include/spdk/event.h
00:04:19.295 TEST_HEADER include/spdk/fd_group.h
00:04:19.295 TEST_HEADER include/spdk/fd.h
00:04:19.295 TEST_HEADER include/spdk/file.h
00:04:19.295 TEST_HEADER include/spdk/fsdev.h
00:04:19.295 TEST_HEADER include/spdk/fsdev_module.h
00:04:19.295 CC examples/ioat/perf/perf.o
00:04:19.295 CC examples/util/zipf/zipf.o
00:04:19.295 TEST_HEADER include/spdk/ftl.h
00:04:19.295 TEST_HEADER include/spdk/fuse_dispatcher.h
00:04:19.295 TEST_HEADER include/spdk/gpt_spec.h
00:04:19.295 TEST_HEADER include/spdk/hexlify.h
00:04:19.295 TEST_HEADER include/spdk/histogram_data.h
00:04:19.295 CC test/thread/poller_perf/poller_perf.o
00:04:19.295 TEST_HEADER include/spdk/idxd.h
00:04:19.295 TEST_HEADER include/spdk/idxd_spec.h
00:04:19.295 TEST_HEADER include/spdk/init.h
00:04:19.295 TEST_HEADER include/spdk/ioat.h
00:04:19.295 TEST_HEADER include/spdk/ioat_spec.h
00:04:19.295 TEST_HEADER include/spdk/iscsi_spec.h
00:04:19.295 TEST_HEADER include/spdk/json.h
00:04:19.295 TEST_HEADER include/spdk/jsonrpc.h
00:04:19.295 TEST_HEADER include/spdk/keyring.h
00:04:19.295 TEST_HEADER include/spdk/keyring_module.h
00:04:19.295 TEST_HEADER include/spdk/likely.h
00:04:19.295 TEST_HEADER include/spdk/log.h
00:04:19.295 TEST_HEADER include/spdk/lvol.h
00:04:19.295 TEST_HEADER include/spdk/md5.h
00:04:19.295 TEST_HEADER include/spdk/memory.h
00:04:19.295 TEST_HEADER include/spdk/mmio.h
00:04:19.295 TEST_HEADER include/spdk/nbd.h
00:04:19.295 TEST_HEADER include/spdk/net.h
00:04:19.295 TEST_HEADER include/spdk/notify.h
00:04:19.295 CC test/dma/test_dma/test_dma.o
00:04:19.295 TEST_HEADER include/spdk/nvme.h
00:04:19.295 TEST_HEADER include/spdk/nvme_intel.h
00:04:19.295 CC test/app/bdev_svc/bdev_svc.o
00:04:19.295 TEST_HEADER include/spdk/nvme_ocssd.h
00:04:19.295 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:04:19.295 TEST_HEADER include/spdk/nvme_spec.h
00:04:19.295 TEST_HEADER include/spdk/nvme_zns.h
00:04:19.295 TEST_HEADER include/spdk/nvmf_cmd.h
00:04:19.295 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:04:19.295 TEST_HEADER include/spdk/nvmf.h
00:04:19.295 TEST_HEADER include/spdk/nvmf_spec.h
00:04:19.295 TEST_HEADER include/spdk/nvmf_transport.h
00:04:19.295 TEST_HEADER include/spdk/opal.h
00:04:19.295 TEST_HEADER include/spdk/opal_spec.h
00:04:19.295 TEST_HEADER include/spdk/pci_ids.h
00:04:19.295 TEST_HEADER include/spdk/pipe.h
00:04:19.554 TEST_HEADER include/spdk/queue.h
00:04:19.554 TEST_HEADER include/spdk/reduce.h
00:04:19.554 TEST_HEADER include/spdk/rpc.h
00:04:19.554 TEST_HEADER include/spdk/scheduler.h
00:04:19.554 TEST_HEADER include/spdk/scsi.h
00:04:19.554 TEST_HEADER include/spdk/scsi_spec.h
00:04:19.554 TEST_HEADER include/spdk/sock.h
00:04:19.554 TEST_HEADER include/spdk/stdinc.h
00:04:19.554 TEST_HEADER include/spdk/string.h
00:04:19.554 CC test/env/mem_callbacks/mem_callbacks.o
00:04:19.554 TEST_HEADER include/spdk/thread.h
00:04:19.554 TEST_HEADER include/spdk/trace.h
00:04:19.554 TEST_HEADER include/spdk/trace_parser.h
00:04:19.554 TEST_HEADER include/spdk/tree.h
00:04:19.554 TEST_HEADER include/spdk/ublk.h
00:04:19.554 TEST_HEADER include/spdk/util.h
00:04:19.554 TEST_HEADER include/spdk/uuid.h
00:04:19.554 TEST_HEADER include/spdk/version.h
00:04:19.554 TEST_HEADER include/spdk/vfio_user_pci.h
00:04:19.554 TEST_HEADER include/spdk/vfio_user_spec.h
00:04:19.554 TEST_HEADER include/spdk/vhost.h
00:04:19.554 TEST_HEADER include/spdk/vmd.h
00:04:19.554 TEST_HEADER include/spdk/xor.h
00:04:19.554 LINK rpc_client_test
00:04:19.554 TEST_HEADER include/spdk/zipf.h
00:04:19.554 CXX test/cpp_headers/accel.o
00:04:19.554 LINK zipf
00:04:19.554 LINK poller_perf
00:04:19.554 LINK interrupt_tgt
00:04:19.554 LINK bdev_svc
00:04:19.554 LINK ioat_perf
00:04:19.554 CXX test/cpp_headers/accel_module.o
00:04:19.554 CXX test/cpp_headers/assert.o
00:04:19.811 LINK spdk_trace
00:04:19.811 CC test/app/histogram_perf/histogram_perf.o
00:04:19.811 CXX test/cpp_headers/barrier.o
00:04:19.811 CC app/trace_record/trace_record.o
00:04:19.811 CC examples/ioat/verify/verify.o
00:04:19.811 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:04:19.811 CC app/nvmf_tgt/nvmf_main.o
00:04:19.811 LINK histogram_perf
00:04:19.811 CC app/iscsi_tgt/iscsi_tgt.o
00:04:19.811 LINK test_dma
00:04:19.811 CXX test/cpp_headers/base64.o
00:04:19.811 CC test/env/vtophys/vtophys.o
00:04:20.069 LINK mem_callbacks
00:04:20.069 LINK verify
00:04:20.069 LINK spdk_trace_record
00:04:20.069 LINK nvmf_tgt
00:04:20.069 LINK vtophys
00:04:20.069 CXX test/cpp_headers/bdev.o
00:04:20.069 LINK iscsi_tgt
00:04:20.069 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:04:20.069 CC test/event/event_perf/event_perf.o
00:04:20.069 CXX test/cpp_headers/bdev_module.o
00:04:20.326 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:04:20.326 LINK nvme_fuzz
00:04:20.326 CC test/app/jsoncat/jsoncat.o
00:04:20.326 CC test/env/memory/memory_ut.o
00:04:20.326 CC examples/thread/thread/thread_ex.o
00:04:20.326 CC test/accel/dif/dif.o
00:04:20.326 LINK env_dpdk_post_init
00:04:20.326 CXX test/cpp_headers/bdev_zone.o
00:04:20.326 LINK event_perf
00:04:20.326 CXX test/cpp_headers/bit_array.o
00:04:20.326 CC app/spdk_tgt/spdk_tgt.o
00:04:20.326 LINK jsoncat
00:04:20.584 CC test/event/reactor/reactor.o
00:04:20.584 CXX test/cpp_headers/bit_pool.o
00:04:20.584 LINK spdk_tgt
00:04:20.584 CC test/event/reactor_perf/reactor_perf.o
00:04:20.584 CC test/env/pci/pci_ut.o
00:04:20.584 LINK thread
00:04:20.584 CXX test/cpp_headers/blob_bdev.o
00:04:20.584 LINK reactor
00:04:20.584 CC examples/sock/hello_world/hello_sock.o
00:04:20.584 LINK reactor_perf
00:04:20.843 CXX test/cpp_headers/blobfs_bdev.o
00:04:20.843 CC app/spdk_lspci/spdk_lspci.o
00:04:20.843 CC test/event/app_repeat/app_repeat.o
00:04:20.843 LINK pci_ut
00:04:20.843 CXX test/cpp_headers/blobfs.o
00:04:20.843 LINK hello_sock
00:04:20.843 LINK spdk_lspci
00:04:20.843 CC test/event/scheduler/scheduler.o
00:04:20.843 CC examples/vmd/lsvmd/lsvmd.o
00:04:21.101 CXX test/cpp_headers/blob.o
00:04:21.101 LINK app_repeat
00:04:21.101 CXX test/cpp_headers/conf.o
00:04:21.101 LINK dif
00:04:21.101 LINK lsvmd
00:04:21.101 LINK scheduler
00:04:21.101 CC app/spdk_nvme_identify/identify.o
00:04:21.101 CC app/spdk_nvme_perf/perf.o
00:04:21.101 CXX test/cpp_headers/config.o
00:04:21.101 CXX test/cpp_headers/cpuset.o
00:04:21.360 CC app/spdk_nvme_discover/discovery_aer.o
00:04:21.360 CC examples/vmd/led/led.o
00:04:21.360 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:04:21.360 CXX test/cpp_headers/crc16.o
00:04:21.360 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:04:21.360 CC examples/idxd/perf/perf.o
00:04:21.360 LINK led
00:04:21.360 LINK memory_ut
00:04:21.360 LINK spdk_nvme_discover
00:04:21.360 CXX test/cpp_headers/crc32.o
00:04:21.618 CC test/blobfs/mkfs/mkfs.o
00:04:21.618 CXX test/cpp_headers/crc64.o
00:04:21.618 CC examples/accel/perf/accel_perf.o
00:04:21.618 LINK vhost_fuzz
00:04:21.618 LINK idxd_perf
00:04:21.618 LINK mkfs
00:04:21.618 CC test/lvol/esnap/esnap.o
00:04:21.876 CC examples/fsdev/hello_world/hello_fsdev.o
00:04:21.876 CXX test/cpp_headers/dif.o
00:04:21.876 CXX test/cpp_headers/dma.o
00:04:21.876 CXX test/cpp_headers/endian.o
00:04:21.876 LINK spdk_nvme_perf
00:04:21.876 CC test/app/stub/stub.o
00:04:21.876 LINK iscsi_fuzz
00:04:21.876 LINK spdk_nvme_identify
00:04:22.160 LINK hello_fsdev
00:04:22.160 CXX test/cpp_headers/env_dpdk.o
00:04:22.160 LINK stub
00:04:22.160 CC examples/blob/cli/blobcli.o
00:04:22.160 CC examples/blob/hello_world/hello_blob.o
00:04:22.160 CXX test/cpp_headers/env.o
00:04:22.160 CC app/spdk_top/spdk_top.o
00:04:22.160 CC test/nvme/aer/aer.o
00:04:22.160 CXX test/cpp_headers/event.o
00:04:22.160 LINK accel_perf
00:04:22.417 CXX test/cpp_headers/fd_group.o
00:04:22.417 CC examples/nvme/hello_world/hello_world.o
00:04:22.417 LINK hello_blob
00:04:22.417 CC test/bdev/bdevio/bdevio.o
00:04:22.417 CXX test/cpp_headers/fd.o
00:04:22.417 CC examples/nvme/reconnect/reconnect.o
00:04:22.417 LINK aer
00:04:22.417 CXX test/cpp_headers/file.o
00:04:22.674 CC test/nvme/reset/reset.o
00:04:22.674 LINK hello_world
00:04:22.674 CC test/nvme/sgl/sgl.o
00:04:22.674 LINK blobcli
00:04:22.674 CXX test/cpp_headers/fsdev.o
00:04:22.674 LINK bdevio
00:04:22.931 CC test/nvme/e2edp/nvme_dp.o
00:04:22.931 CXX test/cpp_headers/fsdev_module.o
00:04:22.931 LINK reconnect
00:04:22.931 LINK reset
00:04:22.931 CC app/vhost/vhost.o
00:04:22.931 LINK sgl
00:04:22.931 CC app/spdk_dd/spdk_dd.o
00:04:22.931 CXX test/cpp_headers/ftl.o
00:04:22.931 CXX test/cpp_headers/fuse_dispatcher.o
00:04:22.931 LINK nvme_dp
00:04:22.931 CC test/nvme/overhead/overhead.o
00:04:22.931 LINK vhost
00:04:22.931 CC examples/nvme/nvme_manage/nvme_manage.o
00:04:22.931 CC app/fio/nvme/fio_plugin.o
00:04:23.188 LINK spdk_top
00:04:23.188 CXX test/cpp_headers/gpt_spec.o
00:04:23.188 CC test/nvme/err_injection/err_injection.o
00:04:23.188 CXX test/cpp_headers/hexlify.o
00:04:23.188 LINK spdk_dd
00:04:23.188 CXX test/cpp_headers/histogram_data.o
00:04:23.188 LINK overhead
00:04:23.447 CC examples/bdev/hello_world/hello_bdev.o
00:04:23.447 CC test/nvme/startup/startup.o
00:04:23.447 LINK err_injection
00:04:23.447 CC test/nvme/reserve/reserve.o
00:04:23.447 CXX test/cpp_headers/idxd.o
00:04:23.447 LINK startup
00:04:23.447 CC examples/nvme/arbitration/arbitration.o
00:04:23.447 CC test/nvme/simple_copy/simple_copy.o
00:04:23.447 LINK hello_bdev
00:04:23.447 LINK nvme_manage
00:04:23.705 CXX test/cpp_headers/idxd_spec.o
00:04:23.705 CC examples/nvme/hotplug/hotplug.o
00:04:23.705 LINK reserve
00:04:23.705 CXX test/cpp_headers/init.o
00:04:23.705 LINK spdk_nvme
00:04:23.705 CXX test/cpp_headers/ioat.o
00:04:23.705 CC test/nvme/connect_stress/connect_stress.o
00:04:23.705 LINK simple_copy
00:04:23.705 CXX test/cpp_headers/ioat_spec.o
00:04:23.705 CC
examples/bdev/bdevperf/bdevperf.o 00:04:23.705 LINK hotplug 00:04:23.964 CC test/nvme/boot_partition/boot_partition.o 00:04:23.964 LINK arbitration 00:04:23.964 CC test/nvme/compliance/nvme_compliance.o 00:04:23.964 LINK connect_stress 00:04:23.964 CC app/fio/bdev/fio_plugin.o 00:04:23.964 CXX test/cpp_headers/iscsi_spec.o 00:04:23.964 CC test/nvme/fused_ordering/fused_ordering.o 00:04:23.964 LINK boot_partition 00:04:23.964 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:23.964 CC examples/nvme/abort/abort.o 00:04:24.223 CXX test/cpp_headers/json.o 00:04:24.223 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:24.223 LINK cmb_copy 00:04:24.223 LINK fused_ordering 00:04:24.223 CC test/nvme/fdp/fdp.o 00:04:24.223 CXX test/cpp_headers/jsonrpc.o 00:04:24.223 LINK nvme_compliance 00:04:24.223 CXX test/cpp_headers/keyring.o 00:04:24.479 LINK doorbell_aers 00:04:24.479 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:24.479 CXX test/cpp_headers/keyring_module.o 00:04:24.479 LINK spdk_bdev 00:04:24.479 LINK abort 00:04:24.479 CXX test/cpp_headers/likely.o 00:04:24.479 CXX test/cpp_headers/log.o 00:04:24.479 CC test/nvme/cuse/cuse.o 00:04:24.479 LINK pmr_persistence 00:04:24.479 CXX test/cpp_headers/lvol.o 00:04:24.479 LINK fdp 00:04:24.479 CXX test/cpp_headers/md5.o 00:04:24.479 CXX test/cpp_headers/memory.o 00:04:24.763 CXX test/cpp_headers/mmio.o 00:04:24.763 CXX test/cpp_headers/nbd.o 00:04:24.763 CXX test/cpp_headers/net.o 00:04:24.763 CXX test/cpp_headers/notify.o 00:04:24.763 CXX test/cpp_headers/nvme.o 00:04:24.763 CXX test/cpp_headers/nvme_intel.o 00:04:24.763 CXX test/cpp_headers/nvme_ocssd.o 00:04:24.763 LINK bdevperf 00:04:24.763 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:24.763 CXX test/cpp_headers/nvme_spec.o 00:04:24.763 CXX test/cpp_headers/nvme_zns.o 00:04:24.763 CXX test/cpp_headers/nvmf_cmd.o 00:04:24.763 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:24.763 CXX test/cpp_headers/nvmf.o 00:04:24.763 CXX test/cpp_headers/nvmf_spec.o 00:04:25.033 CXX test/cpp_headers/nvmf_transport.o 00:04:25.033 CXX test/cpp_headers/opal.o 00:04:25.033 CXX test/cpp_headers/opal_spec.o 00:04:25.033 CXX test/cpp_headers/pci_ids.o 00:04:25.033 CXX test/cpp_headers/pipe.o 00:04:25.033 CXX test/cpp_headers/queue.o 00:04:25.033 CXX test/cpp_headers/reduce.o 00:04:25.033 CXX test/cpp_headers/rpc.o 00:04:25.033 CC examples/nvmf/nvmf/nvmf.o 00:04:25.033 CXX test/cpp_headers/scheduler.o 00:04:25.033 CXX test/cpp_headers/scsi.o 00:04:25.033 CXX test/cpp_headers/scsi_spec.o 00:04:25.033 CXX test/cpp_headers/sock.o 00:04:25.033 CXX test/cpp_headers/stdinc.o 00:04:25.292 CXX test/cpp_headers/string.o 00:04:25.292 CXX test/cpp_headers/thread.o 00:04:25.292 CXX test/cpp_headers/trace.o 00:04:25.292 CXX test/cpp_headers/trace_parser.o 00:04:25.292 CXX test/cpp_headers/tree.o 00:04:25.292 CXX test/cpp_headers/ublk.o 00:04:25.292 CXX test/cpp_headers/util.o 00:04:25.292 CXX test/cpp_headers/uuid.o 00:04:25.292 CXX test/cpp_headers/version.o 00:04:25.292 CXX test/cpp_headers/vfio_user_pci.o 00:04:25.292 CXX test/cpp_headers/vfio_user_spec.o 00:04:25.292 LINK nvmf 00:04:25.550 CXX test/cpp_headers/vhost.o 00:04:25.550 CXX test/cpp_headers/vmd.o 00:04:25.550 CXX test/cpp_headers/xor.o 00:04:25.550 CXX test/cpp_headers/zipf.o 00:04:25.808 LINK cuse 00:04:27.184 LINK esnap 00:04:27.443 00:04:27.443 real 1m17.664s 00:04:27.443 user 6m56.365s 00:04:27.443 sys 1m17.053s 00:04:27.443 12:07:58 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:27.443 12:07:58 make -- common/autotest_common.sh@10 -- $ set +x 
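Note: the SO/SYMLINK pairs in the build output above are SPDK's versioned shared libraries being linked and then aliased for the linker, e.g. libspdk_bdev_nvme.so.7.1 behind libspdk_bdev_nvme.so. A minimal sketch of that convention with an illustrative library name (not SPDK's actual Makefile rules):

# Link a versioned shared object with an embedded soname, then point the
# unversioned development symlink at it so -lfoo resolves at link time.
gcc -shared -fPIC -Wl,-soname,libfoo.so.7 -o libfoo.so.7.1 foo.o
ln -sf libfoo.so.7.1 libfoo.so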
00:04:27.443 ************************************ 00:04:27.443 END TEST make 00:04:27.443 ************************************ 00:04:27.443 12:07:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:27.443 12:07:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:27.443 12:07:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:27.443 12:07:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.443 12:07:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:27.443 12:07:58 -- pm/common@44 -- $ pid=5063 00:04:27.443 12:07:58 -- pm/common@50 -- $ kill -TERM 5063 00:04:27.443 12:07:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.443 12:07:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:27.443 12:07:58 -- pm/common@44 -- $ pid=5064 00:04:27.443 12:07:58 -- pm/common@50 -- $ kill -TERM 5064 00:04:27.443 12:07:58 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:27.443 12:07:58 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:27.443 12:07:58 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:27.443 12:07:58 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:27.443 12:07:58 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:27.701 12:07:58 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:27.701 12:07:58 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.701 12:07:58 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.701 12:07:58 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.701 12:07:58 -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.701 12:07:58 -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.701 12:07:58 -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.701 12:07:58 -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.701 12:07:58 -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.701 12:07:58 -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.701 12:07:58 -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.701 12:07:58 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.701 12:07:58 -- scripts/common.sh@344 -- # case "$op" in 00:04:27.701 12:07:58 -- scripts/common.sh@345 -- # : 1 00:04:27.701 12:07:58 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.701 12:07:58 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.701 12:07:58 -- scripts/common.sh@365 -- # decimal 1 00:04:27.701 12:07:58 -- scripts/common.sh@353 -- # local d=1 00:04:27.701 12:07:58 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.701 12:07:58 -- scripts/common.sh@355 -- # echo 1 00:04:27.701 12:07:58 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.701 12:07:58 -- scripts/common.sh@366 -- # decimal 2 00:04:27.701 12:07:58 -- scripts/common.sh@353 -- # local d=2 00:04:27.701 12:07:58 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.701 12:07:58 -- scripts/common.sh@355 -- # echo 2 00:04:27.701 12:07:58 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.701 12:07:58 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.701 12:07:58 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.701 12:07:58 -- scripts/common.sh@368 -- # return 0 00:04:27.701 12:07:58 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.701 12:07:58 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:27.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.701 --rc genhtml_branch_coverage=1 00:04:27.701 --rc genhtml_function_coverage=1 00:04:27.701 --rc genhtml_legend=1 00:04:27.701 --rc geninfo_all_blocks=1 00:04:27.701 --rc geninfo_unexecuted_blocks=1 00:04:27.701 00:04:27.701 ' 00:04:27.701 12:07:58 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:27.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.701 --rc genhtml_branch_coverage=1 00:04:27.701 --rc genhtml_function_coverage=1 00:04:27.701 --rc genhtml_legend=1 00:04:27.701 --rc geninfo_all_blocks=1 00:04:27.701 --rc geninfo_unexecuted_blocks=1 00:04:27.701 00:04:27.701 ' 00:04:27.701 12:07:58 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:27.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.701 --rc genhtml_branch_coverage=1 00:04:27.701 --rc genhtml_function_coverage=1 00:04:27.701 --rc genhtml_legend=1 00:04:27.701 --rc geninfo_all_blocks=1 00:04:27.701 --rc geninfo_unexecuted_blocks=1 00:04:27.701 00:04:27.701 ' 00:04:27.701 12:07:58 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:27.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.701 --rc genhtml_branch_coverage=1 00:04:27.701 --rc genhtml_function_coverage=1 00:04:27.701 --rc genhtml_legend=1 00:04:27.701 --rc geninfo_all_blocks=1 00:04:27.701 --rc geninfo_unexecuted_blocks=1 00:04:27.701 00:04:27.701 ' 00:04:27.701 12:07:58 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:27.701 12:07:58 -- nvmf/common.sh@7 -- # uname -s 00:04:27.701 12:07:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.701 12:07:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.701 12:07:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.701 12:07:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.701 12:07:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.701 12:07:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.701 12:07:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.701 12:07:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.701 12:07:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.701 12:07:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.701 12:07:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3fadf30e-042d-4555-8c89-3612ece365ef 00:04:27.701 
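Note: the cmp_versions trace above is how autotest decides whether the installed lcov (1.15 here) predates 2.x before picking coverage flags: split both version strings on dots and dashes, then compare numeric fields left to right. A condensed stand-alone sketch of that walk (version_lt is a hypothetical name, not the scripts/common.sh helper, and it assumes purely numeric fields):

# Return success when version $1 sorts before $2, comparing dotted
# numeric fields left to right; missing fields default to 0.
version_lt() {
    local -a a b
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # versions are equal
}
version_lt 1.15 2 && echo "lcov is older than 2.x"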
12:07:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=3fadf30e-042d-4555-8c89-3612ece365ef 00:04:27.701 12:07:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.701 12:07:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.701 12:07:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.701 12:07:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.701 12:07:58 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:27.701 12:07:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:27.701 12:07:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.701 12:07:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.701 12:07:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.701 12:07:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.702 12:07:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.702 12:07:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.702 12:07:58 -- paths/export.sh@5 -- # export PATH 00:04:27.702 12:07:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.702 12:07:58 -- nvmf/common.sh@51 -- # : 0 00:04:27.702 12:07:58 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:27.702 12:07:58 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:27.702 12:07:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.702 12:07:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.702 12:07:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.702 12:07:58 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:27.702 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:27.702 12:07:58 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:27.702 12:07:58 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:27.702 12:07:58 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:27.702 12:07:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:27.702 12:07:58 -- spdk/autotest.sh@32 -- # uname -s 00:04:27.702 12:07:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:27.702 12:07:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:27.702 12:07:58 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:27.702 12:07:58 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:27.702 12:07:58 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:27.702 12:07:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:27.702 12:07:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:27.702 12:07:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:27.702 12:07:58 -- spdk/autotest.sh@48 -- # udevadm_pid=54382 00:04:27.702 12:07:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:27.702 12:07:58 -- pm/common@17 -- # local monitor 00:04:27.702 12:07:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:27.702 12:07:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.702 12:07:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:27.702 12:07:58 -- pm/common@25 -- # sleep 1 00:04:27.702 12:07:58 -- pm/common@21 -- # date +%s 00:04:27.702 12:07:58 -- pm/common@21 -- # date +%s 00:04:27.702 12:07:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733400478 00:04:27.702 12:07:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733400478 00:04:27.702 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733400478_collect-vmstat.pm.log 00:04:27.702 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733400478_collect-cpu-load.pm.log 00:04:28.637 12:07:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:28.637 12:07:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:28.637 12:07:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.637 12:07:59 -- common/autotest_common.sh@10 -- # set +x 00:04:28.637 12:07:59 -- spdk/autotest.sh@59 -- # create_test_list 00:04:28.637 12:07:59 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:28.637 12:07:59 -- common/autotest_common.sh@10 -- # set +x 00:04:28.895 12:07:59 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:28.895 12:07:59 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:28.895 12:07:59 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:28.895 12:07:59 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:28.895 12:07:59 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:28.895 12:07:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:28.895 12:07:59 -- common/autotest_common.sh@1457 -- # uname 00:04:28.895 12:07:59 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:28.895 12:07:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:28.895 12:07:59 -- common/autotest_common.sh@1477 -- # uname 00:04:28.895 12:07:59 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:28.895 12:07:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:28.895 12:07:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:28.895 lcov: LCOV version 1.15 00:04:28.895 12:07:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:43.811 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:43.811 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:01.942 12:08:30 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:01.942 12:08:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.942 12:08:30 -- common/autotest_common.sh@10 -- # set +x 00:05:01.942 12:08:30 -- spdk/autotest.sh@78 -- # rm -f 00:05:01.942 12:08:30 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:01.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.942 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:01.942 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:01.942 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:01.942 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:01.942 12:08:30 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:01.942 12:08:30 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:01.942 12:08:30 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:01.942 12:08:30 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:01.942 12:08:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:01.942 12:08:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:01.942 12:08:30 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:01.942 12:08:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:01.942 12:08:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:01.942 12:08:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:01.942 12:08:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:01.942 12:08:30 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:01.942 12:08:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:01.942 12:08:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:01.942 12:08:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:01.942 12:08:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:01.942 12:08:30 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:01.942 12:08:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:01.942 12:08:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:01.942 12:08:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:01.942 12:08:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:05:01.942 12:08:30 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:01.942 12:08:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:01.942 12:08:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:01.942 12:08:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:01.942 12:08:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:05:01.942 12:08:30 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:01.942 12:08:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:01.942 12:08:30 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:01.942 12:08:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:01.942 12:08:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:05:01.942 12:08:30 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:01.942 12:08:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:01.942 12:08:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:01.942 12:08:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:01.942 12:08:31 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:01.942 12:08:31 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:01.942 12:08:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:01.942 12:08:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:01.942 12:08:31 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:01.942 12:08:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.942 12:08:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.942 12:08:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:01.942 12:08:31 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:01.942 12:08:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:01.942 No valid GPT data, bailing 00:05:01.942 12:08:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:01.942 12:08:31 -- scripts/common.sh@394 -- # pt= 00:05:01.942 12:08:31 -- scripts/common.sh@395 -- # return 1 00:05:01.942 12:08:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:01.942 1+0 records in 00:05:01.942 1+0 records out 00:05:01.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0337874 s, 31.0 MB/s 00:05:01.942 12:08:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.942 12:08:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.942 12:08:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:01.942 12:08:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:01.942 12:08:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:01.942 No valid GPT data, bailing 00:05:01.942 12:08:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:01.942 12:08:31 -- scripts/common.sh@394 -- # pt= 00:05:01.942 12:08:31 -- scripts/common.sh@395 -- # return 1 00:05:01.942 12:08:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:01.942 1+0 records in 00:05:01.942 1+0 records out 00:05:01.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048255 s, 217 MB/s 00:05:01.942 12:08:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.942 12:08:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.942 12:08:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:01.942 12:08:31 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:01.942 12:08:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:01.942 No valid GPT data, bailing 00:05:01.942 12:08:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:01.942 12:08:31 -- scripts/common.sh@394 -- # pt= 00:05:01.942 12:08:31 -- scripts/common.sh@395 -- # return 1 00:05:01.942 12:08:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:01.942 1+0 
records in 00:05:01.942 1+0 records out 00:05:01.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501566 s, 209 MB/s 00:05:01.942 12:08:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.942 12:08:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.942 12:08:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:01.942 12:08:31 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:01.942 12:08:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:01.942 No valid GPT data, bailing 00:05:01.942 12:08:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:01.942 12:08:31 -- scripts/common.sh@394 -- # pt= 00:05:01.942 12:08:31 -- scripts/common.sh@395 -- # return 1 00:05:01.942 12:08:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:01.942 1+0 records in 00:05:01.942 1+0 records out 00:05:01.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504771 s, 208 MB/s 00:05:01.942 12:08:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.942 12:08:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.942 12:08:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:01.942 12:08:31 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:01.942 12:08:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:01.942 No valid GPT data, bailing 00:05:01.942 12:08:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:01.942 12:08:31 -- scripts/common.sh@394 -- # pt= 00:05:01.942 12:08:31 -- scripts/common.sh@395 -- # return 1 00:05:01.942 12:08:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:01.942 1+0 records in 00:05:01.942 1+0 records out 00:05:01.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00595759 s, 176 MB/s 00:05:01.942 12:08:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.942 12:08:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.942 12:08:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:01.942 12:08:31 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:01.942 12:08:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:01.942 No valid GPT data, bailing 00:05:01.942 12:08:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:01.942 12:08:31 -- scripts/common.sh@394 -- # pt= 00:05:01.943 12:08:31 -- scripts/common.sh@395 -- # return 1 00:05:01.943 12:08:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:01.943 1+0 records in 00:05:01.943 1+0 records out 00:05:01.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499374 s, 210 MB/s 00:05:01.943 12:08:31 -- spdk/autotest.sh@105 -- # sync 00:05:01.943 12:08:31 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:01.943 12:08:31 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:01.943 12:08:31 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:02.556 12:08:33 -- spdk/autotest.sh@111 -- # uname -s 00:05:02.556 12:08:33 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:02.556 12:08:33 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:02.556 12:08:33 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:02.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.391 
Hugepages 00:05:03.391 node hugesize free / total 00:05:03.391 node0 1048576kB 0 / 0 00:05:03.391 node0 2048kB 0 / 0 00:05:03.391 00:05:03.391 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:03.391 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:03.391 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:03.391 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:03.391 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:03.651 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:03.651 12:08:34 -- spdk/autotest.sh@117 -- # uname -s 00:05:03.651 12:08:34 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:03.651 12:08:34 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:03.651 12:08:34 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.911 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.479 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.479 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.479 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.479 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.737 12:08:35 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:05.693 12:08:36 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:05.693 12:08:36 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:05.693 12:08:36 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:05.693 12:08:36 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:05.693 12:08:36 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:05.693 12:08:36 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:05.693 12:08:36 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.693 12:08:36 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.693 12:08:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:05.693 12:08:36 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:05.693 12:08:36 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:05.693 12:08:36 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:05.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.211 Waiting for block devices as requested 00:05:06.211 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:06.211 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:06.471 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:06.471 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:11.763 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:11.763 12:08:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:11.763 12:08:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:11.763 12:08:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:11.763 12:08:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:11.763 12:08:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:11.763 12:08:42 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:11.763 12:08:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:11.763 12:08:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:11.763 12:08:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:11.763 12:08:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:11.763 12:08:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:11.763 12:08:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:11.763 12:08:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:11.763 12:08:42 -- common/autotest_common.sh@1543 -- # continue 00:05:11.763 12:08:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:11.763 12:08:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:11.763 12:08:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:11.763 12:08:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:11.763 12:08:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:11.763 12:08:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:11.763 12:08:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:11.763 12:08:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:11.763 12:08:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:11.763 12:08:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:11.763 12:08:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:11.763 12:08:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:11.763 12:08:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:11.763 12:08:42 -- common/autotest_common.sh@1543 -- # continue 00:05:11.763 12:08:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:11.763 12:08:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:11.763 12:08:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:11.763 12:08:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:11.763 12:08:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:11.763 12:08:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:11.763 12:08:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:11.763 12:08:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:11.763 12:08:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:11.763 12:08:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:11.763 12:08:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:11.763 12:08:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:11.763 12:08:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:11.763 12:08:42 -- common/autotest_common.sh@1543 -- # continue 00:05:11.763 12:08:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:11.763 12:08:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:11.763 12:08:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:11.763 12:08:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:11.763 12:08:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:11.763 12:08:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:11.763 12:08:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:11.763 12:08:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:11.763 12:08:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:11.763 12:08:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:11.763 12:08:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:11.763 12:08:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:11.763 12:08:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:11.763 12:08:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:11.763 12:08:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
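Note: the id-ctrl checks above gate the per-controller cleanup on two fields: OACS bit 3 (0x8, Namespace Management) must be set, and unvmcap must be 0. Re-stated as a small stand-alone check, with /dev/nvme1 as an illustrative device node:

# Does the controller advertise Namespace Management? nvme-cli reports
# OACS as e.g. 'oacs : 0x12a'; bit 3 (0x8) is the capability bit tested.
oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)
if (( oacs & 0x8 )); then
    echo "/dev/nvme1 supports namespace management"
fi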
00:05:11.763 12:08:42 -- common/autotest_common.sh@1543 -- # continue 00:05:11.763 12:08:42 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:11.763 12:08:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.763 12:08:42 -- common/autotest_common.sh@10 -- # set +x 00:05:11.763 12:08:42 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:11.763 12:08:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.763 12:08:42 -- common/autotest_common.sh@10 -- # set +x 00:05:11.763 12:08:42 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:12.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:12.598 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:12.598 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:12.860 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:12.860 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:12.860 12:08:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:12.860 12:08:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.860 12:08:43 -- common/autotest_common.sh@10 -- # set +x 00:05:12.860 12:08:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:12.860 12:08:43 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:12.860 12:08:43 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:12.860 12:08:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:12.860 12:08:43 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:12.860 12:08:43 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:12.860 12:08:43 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:12.860 12:08:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:12.860 12:08:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:12.860 12:08:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:12.860 12:08:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.860 12:08:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:12.860 12:08:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:12.860 12:08:43 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:12.860 12:08:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:12.860 12:08:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:12.860 12:08:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:12.860 12:08:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:12.860 12:08:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:12.860 12:08:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:12.860 12:08:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:12.860 12:08:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:12.860 12:08:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:12.860 12:08:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:12.860 12:08:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:12.860 12:08:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:12.860 12:08:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
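Note: opal_revert_cleanup above walks every discovered BDF and compares /sys/bus/pci/devices/<bdf>/device against 0x0a54 (an Intel data-center NVMe part that needs an OPAL revert between runs); the QEMU controllers here all report 0x0010, so nothing matches and the loop, which finishes with the last controller just below, returns clean. A stand-alone sketch of the same sysfs filter (nvme_bdfs_by_id is a hypothetical helper name; it assumes controllers are bound to the kernel nvme driver, which exposes the nvme/ subdirectory):

# Print the BDFs of NVMe controllers whose PCI device ID matches $1,
# reading the same sysfs attribute the trace above cats per device.
nvme_bdfs_by_id() {
    local want=$1 dev path
    for path in /sys/bus/pci/devices/*; do
        [[ -d $path/nvme ]] || continue   # only nvme-driver-bound devices
        dev=$(<"$path/device")
        [[ $dev == "$want" ]] && echo "${path##*/}"
    done
}
nvme_bdfs_by_id 0x0a54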
00:05:12.860 12:08:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:12.860 12:08:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:12.860 12:08:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:12.860 12:08:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:12.860 12:08:43 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:12.860 12:08:43 -- common/autotest_common.sh@1572 -- # return 0 00:05:12.860 12:08:43 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:12.860 12:08:43 -- common/autotest_common.sh@1580 -- # return 0 00:05:12.860 12:08:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:12.860 12:08:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:12.860 12:08:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:12.860 12:08:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:12.860 12:08:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:12.860 12:08:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.860 12:08:43 -- common/autotest_common.sh@10 -- # set +x 00:05:12.860 12:08:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:12.860 12:08:43 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:12.860 12:08:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.860 12:08:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.860 12:08:43 -- common/autotest_common.sh@10 -- # set +x 00:05:12.860 ************************************ 00:05:12.860 START TEST env 00:05:12.860 ************************************ 00:05:12.860 12:08:43 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:13.121 * Looking for test storage... 00:05:13.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:13.121 12:08:43 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.121 12:08:43 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.121 12:08:43 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.121 12:08:43 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.121 12:08:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.121 12:08:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.121 12:08:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.121 12:08:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.121 12:08:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.121 12:08:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.121 12:08:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.121 12:08:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.121 12:08:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.121 12:08:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.121 12:08:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.121 12:08:43 env -- scripts/common.sh@344 -- # case "$op" in 00:05:13.121 12:08:43 env -- scripts/common.sh@345 -- # : 1 00:05:13.121 12:08:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.122 12:08:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.122 12:08:43 env -- scripts/common.sh@365 -- # decimal 1 00:05:13.122 12:08:43 env -- scripts/common.sh@353 -- # local d=1 00:05:13.122 12:08:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.122 12:08:43 env -- scripts/common.sh@355 -- # echo 1 00:05:13.122 12:08:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.122 12:08:43 env -- scripts/common.sh@366 -- # decimal 2 00:05:13.122 12:08:43 env -- scripts/common.sh@353 -- # local d=2 00:05:13.122 12:08:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.122 12:08:43 env -- scripts/common.sh@355 -- # echo 2 00:05:13.122 12:08:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.122 12:08:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.122 12:08:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.122 12:08:43 env -- scripts/common.sh@368 -- # return 0 00:05:13.122 12:08:43 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.122 12:08:43 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.122 --rc genhtml_branch_coverage=1 00:05:13.122 --rc genhtml_function_coverage=1 00:05:13.122 --rc genhtml_legend=1 00:05:13.122 --rc geninfo_all_blocks=1 00:05:13.122 --rc geninfo_unexecuted_blocks=1 00:05:13.122 00:05:13.122 ' 00:05:13.122 12:08:43 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.122 --rc genhtml_branch_coverage=1 00:05:13.122 --rc genhtml_function_coverage=1 00:05:13.122 --rc genhtml_legend=1 00:05:13.122 --rc geninfo_all_blocks=1 00:05:13.122 --rc geninfo_unexecuted_blocks=1 00:05:13.122 00:05:13.122 ' 00:05:13.122 12:08:43 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.122 --rc genhtml_branch_coverage=1 00:05:13.122 --rc genhtml_function_coverage=1 00:05:13.122 --rc genhtml_legend=1 00:05:13.122 --rc geninfo_all_blocks=1 00:05:13.122 --rc geninfo_unexecuted_blocks=1 00:05:13.122 00:05:13.122 ' 00:05:13.122 12:08:43 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.122 --rc genhtml_branch_coverage=1 00:05:13.122 --rc genhtml_function_coverage=1 00:05:13.122 --rc genhtml_legend=1 00:05:13.122 --rc geninfo_all_blocks=1 00:05:13.122 --rc geninfo_unexecuted_blocks=1 00:05:13.122 00:05:13.122 ' 00:05:13.122 12:08:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:13.122 12:08:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.122 12:08:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.122 12:08:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.122 ************************************ 00:05:13.122 START TEST env_memory 00:05:13.122 ************************************ 00:05:13.122 12:08:43 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:13.122 00:05:13.122 00:05:13.122 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.122 http://cunit.sourceforge.net/ 00:05:13.122 00:05:13.122 00:05:13.122 Suite: memory 00:05:13.122 Test: alloc and free memory map ...[2024-12-05 12:08:43.925609] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:13.122 passed 00:05:13.122 Test: mem map translation ...[2024-12-05 12:08:43.964306] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:13.122 [2024-12-05 12:08:43.964345] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:13.122 [2024-12-05 12:08:43.964401] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:13.122 [2024-12-05 12:08:43.964416] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:13.382 passed 00:05:13.382 Test: mem map registration ...[2024-12-05 12:08:44.032443] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:13.382 [2024-12-05 12:08:44.032487] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:13.382 passed 00:05:13.382 Test: mem map adjacent registrations ...passed 00:05:13.382 00:05:13.382 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.382 suites 1 1 n/a 0 0 00:05:13.382 tests 4 4 4 0 0 00:05:13.382 asserts 152 152 152 0 n/a 00:05:13.382 00:05:13.382 Elapsed time = 0.233 seconds 00:05:13.382 00:05:13.382 real 0m0.263s 00:05:13.382 user 0m0.233s 00:05:13.382 sys 0m0.024s 00:05:13.382 12:08:44 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.382 ************************************ 00:05:13.382 END TEST env_memory 00:05:13.382 ************************************ 00:05:13.382 12:08:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:13.382 12:08:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:13.382 12:08:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.382 12:08:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.382 12:08:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.382 ************************************ 00:05:13.382 START TEST env_vtophys 00:05:13.382 ************************************ 00:05:13.382 12:08:44 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:13.382 EAL: lib.eal log level changed from notice to debug 00:05:13.382 EAL: Detected lcore 0 as core 0 on socket 0 00:05:13.382 EAL: Detected lcore 1 as core 0 on socket 0 00:05:13.382 EAL: Detected lcore 2 as core 0 on socket 0 00:05:13.382 EAL: Detected lcore 3 as core 0 on socket 0 00:05:13.382 EAL: Detected lcore 4 as core 0 on socket 0 00:05:13.382 EAL: Detected lcore 5 as core 0 on socket 0 00:05:13.382 EAL: Detected lcore 6 as core 0 on socket 0 00:05:13.382 EAL: Detected lcore 7 as core 0 on socket 0 00:05:13.382 EAL: Detected lcore 8 as core 0 on socket 0 00:05:13.382 EAL: Detected lcore 9 as core 0 on socket 0 00:05:13.382 EAL: Maximum logical cores by configuration: 128 00:05:13.382 EAL: Detected CPU lcores: 10 00:05:13.382 EAL: Detected NUMA nodes: 1 00:05:13.382 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:13.382 EAL: Detected shared linkage of DPDK 00:05:13.642 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:13.642 EAL: Selected IOVA mode 'PA' 00:05:13.642 EAL: Probing VFIO support... 00:05:13.642 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:13.642 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:13.642 EAL: Ask a virtual area of 0x2e000 bytes 00:05:13.642 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:13.642 EAL: Setting up physically contiguous memory... 00:05:13.642 EAL: Setting maximum number of open files to 524288 00:05:13.642 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:13.642 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:13.642 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.642 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:13.642 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.642 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.642 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:13.642 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:13.642 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.642 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:13.642 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.642 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.642 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:13.642 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:13.642 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.642 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:13.642 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.642 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.642 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:13.642 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:13.642 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.642 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:13.642 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.642 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.642 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:13.642 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:13.642 EAL: Hugepages will be freed exactly as allocated. 00:05:13.642 EAL: No shared files mode enabled, IPC is disabled 00:05:13.642 EAL: No shared files mode enabled, IPC is disabled 00:05:13.642 EAL: TSC frequency is ~2600000 KHz 00:05:13.642 EAL: Main lcore 0 is ready (tid=7f9a470d9a40;cpuset=[0]) 00:05:13.642 EAL: Trying to obtain current memory policy. 00:05:13.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.642 EAL: Restoring previous memory policy: 0 00:05:13.642 EAL: request: mp_malloc_sync 00:05:13.642 EAL: No shared files mode enabled, IPC is disabled 00:05:13.642 EAL: Heap on socket 0 was expanded by 2MB 00:05:13.642 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:13.642 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:13.642 EAL: Mem event callback 'spdk:(nil)' registered 00:05:13.642 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:13.642 00:05:13.642 00:05:13.642 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.642 http://cunit.sourceforge.net/ 00:05:13.642 00:05:13.642 00:05:13.642 Suite: components_suite 00:05:13.902 Test: vtophys_malloc_test ...passed 00:05:13.902 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:13.902 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.902 EAL: Restoring previous memory policy: 4 00:05:13.902 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.902 EAL: request: mp_malloc_sync 00:05:13.902 EAL: No shared files mode enabled, IPC is disabled 00:05:13.902 EAL: Heap on socket 0 was expanded by 4MB 00:05:13.902 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.902 EAL: request: mp_malloc_sync 00:05:13.902 EAL: No shared files mode enabled, IPC is disabled 00:05:13.902 EAL: Heap on socket 0 was shrunk by 4MB 00:05:13.902 EAL: Trying to obtain current memory policy. 00:05:13.902 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.902 EAL: Restoring previous memory policy: 4 00:05:13.902 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.902 EAL: request: mp_malloc_sync 00:05:13.902 EAL: No shared files mode enabled, IPC is disabled 00:05:13.902 EAL: Heap on socket 0 was expanded by 6MB 00:05:13.902 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.902 EAL: request: mp_malloc_sync 00:05:13.902 EAL: No shared files mode enabled, IPC is disabled 00:05:13.902 EAL: Heap on socket 0 was shrunk by 6MB 00:05:13.902 EAL: Trying to obtain current memory policy. 00:05:13.902 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.902 EAL: Restoring previous memory policy: 4 00:05:13.902 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.902 EAL: request: mp_malloc_sync 00:05:13.902 EAL: No shared files mode enabled, IPC is disabled 00:05:13.902 EAL: Heap on socket 0 was expanded by 10MB 00:05:13.902 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.902 EAL: request: mp_malloc_sync 00:05:13.903 EAL: No shared files mode enabled, IPC is disabled 00:05:13.903 EAL: Heap on socket 0 was shrunk by 10MB 00:05:13.903 EAL: Trying to obtain current memory policy. 00:05:13.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.903 EAL: Restoring previous memory policy: 4 00:05:13.903 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.903 EAL: request: mp_malloc_sync 00:05:13.903 EAL: No shared files mode enabled, IPC is disabled 00:05:13.903 EAL: Heap on socket 0 was expanded by 18MB 00:05:14.163 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.163 EAL: request: mp_malloc_sync 00:05:14.163 EAL: No shared files mode enabled, IPC is disabled 00:05:14.163 EAL: Heap on socket 0 was shrunk by 18MB 00:05:14.163 EAL: Trying to obtain current memory policy. 00:05:14.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.163 EAL: Restoring previous memory policy: 4 00:05:14.163 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.163 EAL: request: mp_malloc_sync 00:05:14.163 EAL: No shared files mode enabled, IPC is disabled 00:05:14.163 EAL: Heap on socket 0 was expanded by 34MB 00:05:14.163 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.163 EAL: request: mp_malloc_sync 00:05:14.163 EAL: No shared files mode enabled, IPC is disabled 00:05:14.163 EAL: Heap on socket 0 was shrunk by 34MB 00:05:14.163 EAL: Trying to obtain current memory policy. 
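The expand/shrink pairs running through this suite come from vtophys_malloc_test allocating progressively larger DMA buffers (4MB up through 1026MB, each step roughly doubling) and freeing them again; each allocation that outgrows the heap fires the 'spdk:(nil)' mem event callback registered above. A minimal sketch of that allocation pattern against SPDK's public env API follows; the buffer size, alignment, and helper name are illustrative, not the test's actual values:

    #include <errno.h>
    #include "spdk/env.h"

    /* Allocate a DMA-safe buffer, resolve its physical address, free it.
     * spdk_dma_malloc() can expand the EAL heap (the "expanded by N MB"
     * lines above) and spdk_dma_free() can let it shrink again. */
    static int
    probe_vtophys(size_t size)
    {
            void *buf = spdk_dma_malloc(size, 0x1000 /* 4 KiB align */, NULL);
            uint64_t paddr;

            if (buf == NULL) {
                    return -ENOMEM;
            }
            paddr = spdk_vtophys(buf, NULL);
            spdk_dma_free(buf);
            return paddr == SPDK_VTOPHYS_ERROR ? -EFAULT : 0;
    }

The env_memory suite earlier exercised the same machinery one level down: spdk_mem_map_set_translation() rejects spans whose address or length is not 2 MiB aligned (the vaddr=2097152 len=1234 and vaddr=1234 len=2097152 errors) as well as addresses at or beyond the 2^48 user-mode limit.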
00:05:14.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.163 EAL: Restoring previous memory policy: 4 00:05:14.163 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.163 EAL: request: mp_malloc_sync 00:05:14.163 EAL: No shared files mode enabled, IPC is disabled 00:05:14.163 EAL: Heap on socket 0 was expanded by 66MB 00:05:14.163 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.163 EAL: request: mp_malloc_sync 00:05:14.163 EAL: No shared files mode enabled, IPC is disabled 00:05:14.163 EAL: Heap on socket 0 was shrunk by 66MB 00:05:14.422 EAL: Trying to obtain current memory policy. 00:05:14.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.422 EAL: Restoring previous memory policy: 4 00:05:14.422 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.422 EAL: request: mp_malloc_sync 00:05:14.422 EAL: No shared files mode enabled, IPC is disabled 00:05:14.423 EAL: Heap on socket 0 was expanded by 130MB 00:05:14.423 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.423 EAL: request: mp_malloc_sync 00:05:14.423 EAL: No shared files mode enabled, IPC is disabled 00:05:14.423 EAL: Heap on socket 0 was shrunk by 130MB 00:05:14.681 EAL: Trying to obtain current memory policy. 00:05:14.681 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.681 EAL: Restoring previous memory policy: 4 00:05:14.681 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.681 EAL: request: mp_malloc_sync 00:05:14.681 EAL: No shared files mode enabled, IPC is disabled 00:05:14.681 EAL: Heap on socket 0 was expanded by 258MB 00:05:14.940 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.940 EAL: request: mp_malloc_sync 00:05:14.940 EAL: No shared files mode enabled, IPC is disabled 00:05:14.940 EAL: Heap on socket 0 was shrunk by 258MB 00:05:15.199 EAL: Trying to obtain current memory policy. 00:05:15.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.457 EAL: Restoring previous memory policy: 4 00:05:15.457 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.457 EAL: request: mp_malloc_sync 00:05:15.457 EAL: No shared files mode enabled, IPC is disabled 00:05:15.457 EAL: Heap on socket 0 was expanded by 514MB 00:05:16.028 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.028 EAL: request: mp_malloc_sync 00:05:16.028 EAL: No shared files mode enabled, IPC is disabled 00:05:16.028 EAL: Heap on socket 0 was shrunk by 514MB 00:05:16.703 EAL: Trying to obtain current memory policy. 
00:05:16.703 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.703 EAL: Restoring previous memory policy: 4 00:05:16.703 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.703 EAL: request: mp_malloc_sync 00:05:16.703 EAL: No shared files mode enabled, IPC is disabled 00:05:16.703 EAL: Heap on socket 0 was expanded by 1026MB 00:05:18.089 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.089 EAL: request: mp_malloc_sync 00:05:18.089 EAL: No shared files mode enabled, IPC is disabled 00:05:18.089 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:19.030 passed 00:05:19.030 00:05:19.030 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.030 suites 1 1 n/a 0 0 00:05:19.030 tests 2 2 2 0 0 00:05:19.030 asserts 5887 5887 5887 0 n/a 00:05:19.030 00:05:19.030 Elapsed time = 5.419 seconds 00:05:19.030 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.030 EAL: request: mp_malloc_sync 00:05:19.030 EAL: No shared files mode enabled, IPC is disabled 00:05:19.030 EAL: Heap on socket 0 was shrunk by 2MB 00:05:19.030 EAL: No shared files mode enabled, IPC is disabled 00:05:19.030 EAL: No shared files mode enabled, IPC is disabled 00:05:19.030 EAL: No shared files mode enabled, IPC is disabled 00:05:19.030 00:05:19.030 real 0m5.697s 00:05:19.030 user 0m4.760s 00:05:19.030 sys 0m0.783s 00:05:19.030 12:08:49 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.030 ************************************ 00:05:19.030 12:08:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:19.030 END TEST env_vtophys 00:05:19.030 ************************************ 00:05:19.289 12:08:49 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:19.289 12:08:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.289 12:08:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.289 12:08:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.289 ************************************ 00:05:19.289 START TEST env_pci 00:05:19.289 ************************************ 00:05:19.289 12:08:49 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:19.289 00:05:19.289 00:05:19.289 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.289 http://cunit.sourceforge.net/ 00:05:19.289 00:05:19.289 00:05:19.289 Suite: pci 00:05:19.289 Test: pci_hook ...[2024-12-05 12:08:49.983839] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57151 has claimed it 00:05:19.289 EAL: Cannot find device (10000:00:01.0) 00:05:19.289 EAL: Failed to attach device on primary process 00:05:19.289 passed 00:05:19.289 00:05:19.289 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.289 suites 1 1 n/a 0 0 00:05:19.289 tests 1 1 1 0 0 00:05:19.289 asserts 25 25 25 0 n/a 00:05:19.289 00:05:19.289 Elapsed time = 0.005 seconds 00:05:19.289 00:05:19.289 real 0m0.061s 00:05:19.289 user 0m0.029s 00:05:19.289 sys 0m0.029s 00:05:19.289 12:08:50 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.289 ************************************ 00:05:19.289 END TEST env_pci 00:05:19.289 ************************************ 00:05:19.289 12:08:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:19.289 12:08:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:19.289 12:08:50 env -- env/env.sh@15 -- # uname 00:05:19.289 12:08:50 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:19.289 12:08:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:19.289 12:08:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:19.289 12:08:50 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:19.289 12:08:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.289 12:08:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.289 ************************************ 00:05:19.289 START TEST env_dpdk_post_init 00:05:19.289 ************************************ 00:05:19.289 12:08:50 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:19.289 EAL: Detected CPU lcores: 10 00:05:19.289 EAL: Detected NUMA nodes: 1 00:05:19.289 EAL: Detected shared linkage of DPDK 00:05:19.289 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:19.289 EAL: Selected IOVA mode 'PA' 00:05:19.549 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:19.549 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:19.549 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:19.549 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:19.549 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:19.549 Starting DPDK initialization... 00:05:19.549 Starting SPDK post initialization... 00:05:19.549 SPDK NVMe probe 00:05:19.549 Attaching to 0000:00:10.0 00:05:19.549 Attaching to 0000:00:11.0 00:05:19.549 Attaching to 0000:00:12.0 00:05:19.549 Attaching to 0000:00:13.0 00:05:19.549 Attached to 0000:00:10.0 00:05:19.549 Attached to 0000:00:11.0 00:05:19.549 Attached to 0000:00:13.0 00:05:19.549 Attached to 0000:00:12.0 00:05:19.549 Cleaning up... 
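The four Attaching/Attached pairs above are spdk_nvme claiming the emulated QEMU NVMe controllers (vendor:device 1b36:0010) once DPDK is up; note the attach order (13.0 before 12.0) need not match probe order, since attach completes per controller. A minimal sketch of one probe/attach pass over the public spdk_nvme_probe() API, assuming spdk_env_init() has already run; the callback bodies and function names here are illustrative:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Attaching to %s\n", trid->traddr);
            return true;    /* accept every controller the bus scan finds */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("Attached to %s\n", trid->traddr);
    }

    static int
    probe_local_bus(void)
    {
            /* NULL trid: enumerate the local PCIe bus, as in the log above. */
            return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }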
00:05:19.549 ************************************ 00:05:19.549 END TEST env_dpdk_post_init 00:05:19.549 ************************************ 00:05:19.549 00:05:19.549 real 0m0.257s 00:05:19.549 user 0m0.092s 00:05:19.549 sys 0m0.067s 00:05:19.549 12:08:50 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.549 12:08:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:19.549 12:08:50 env -- env/env.sh@26 -- # uname 00:05:19.549 12:08:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:19.549 12:08:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:19.549 12:08:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.549 12:08:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.549 12:08:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.549 ************************************ 00:05:19.549 START TEST env_mem_callbacks 00:05:19.549 ************************************ 00:05:19.549 12:08:50 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:19.810 EAL: Detected CPU lcores: 10 00:05:19.810 EAL: Detected NUMA nodes: 1 00:05:19.810 EAL: Detected shared linkage of DPDK 00:05:19.810 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:19.810 EAL: Selected IOVA mode 'PA' 00:05:19.810 00:05:19.810 00:05:19.810 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.810 http://cunit.sourceforge.net/ 00:05:19.810 00:05:19.810 00:05:19.810 Suite: memory 00:05:19.810 Test: test ... 00:05:19.810 register 0x200000200000 2097152 00:05:19.810 malloc 3145728 00:05:19.810 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:19.810 register 0x200000400000 4194304 00:05:19.810 buf 0x2000004fffc0 len 3145728 PASSED 00:05:19.810 malloc 64 00:05:19.810 buf 0x2000004ffec0 len 64 PASSED 00:05:19.810 malloc 4194304 00:05:19.810 register 0x200000800000 6291456 00:05:19.810 buf 0x2000009fffc0 len 4194304 PASSED 00:05:19.810 free 0x2000004fffc0 3145728 00:05:19.810 free 0x2000004ffec0 64 00:05:19.810 unregister 0x200000400000 4194304 PASSED 00:05:19.810 free 0x2000009fffc0 4194304 00:05:19.810 unregister 0x200000800000 6291456 PASSED 00:05:19.810 malloc 8388608 00:05:19.810 register 0x200000400000 10485760 00:05:19.810 buf 0x2000005fffc0 len 8388608 PASSED 00:05:19.810 free 0x2000005fffc0 8388608 00:05:19.810 unregister 0x200000400000 10485760 PASSED 00:05:19.810 passed 00:05:19.810 00:05:19.810 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.810 suites 1 1 n/a 0 0 00:05:19.810 tests 1 1 1 0 0 00:05:19.810 asserts 15 15 15 0 n/a 00:05:19.810 00:05:19.810 Elapsed time = 0.046 seconds 00:05:19.810 00:05:19.810 real 0m0.221s 00:05:19.810 user 0m0.068s 00:05:19.810 sys 0m0.049s 00:05:19.810 12:08:50 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.810 12:08:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:19.810 ************************************ 00:05:19.810 END TEST env_mem_callbacks 00:05:19.810 ************************************ 00:05:19.810 ************************************ 00:05:19.810 END TEST env 00:05:19.810 ************************************ 00:05:19.810 00:05:19.810 real 0m6.953s 00:05:19.810 user 0m5.339s 00:05:19.810 sys 0m1.173s 00:05:19.810 12:08:50 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.810 12:08:50 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:20.071 12:08:50 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:20.071 12:08:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.071 12:08:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.071 12:08:50 -- common/autotest_common.sh@10 -- # set +x 00:05:20.071 ************************************ 00:05:20.071 START TEST rpc 00:05:20.071 ************************************ 00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:20.071 * Looking for test storage... 00:05:20.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.071 12:08:50 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.071 12:08:50 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.071 12:08:50 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.071 12:08:50 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.071 12:08:50 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.071 12:08:50 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.071 12:08:50 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.071 12:08:50 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.071 12:08:50 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.071 12:08:50 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.071 12:08:50 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.071 12:08:50 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:20.071 12:08:50 rpc -- scripts/common.sh@345 -- # : 1 00:05:20.071 12:08:50 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.071 12:08:50 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.071 12:08:50 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:20.071 12:08:50 rpc -- scripts/common.sh@353 -- # local d=1 00:05:20.071 12:08:50 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.071 12:08:50 rpc -- scripts/common.sh@355 -- # echo 1 00:05:20.071 12:08:50 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.071 12:08:50 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:20.071 12:08:50 rpc -- scripts/common.sh@353 -- # local d=2 00:05:20.071 12:08:50 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.071 12:08:50 rpc -- scripts/common.sh@355 -- # echo 2 00:05:20.071 12:08:50 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.071 12:08:50 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.071 12:08:50 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.071 12:08:50 rpc -- scripts/common.sh@368 -- # return 0 00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.071 --rc genhtml_branch_coverage=1 00:05:20.071 --rc genhtml_function_coverage=1 00:05:20.071 --rc genhtml_legend=1 00:05:20.071 --rc geninfo_all_blocks=1 00:05:20.071 --rc geninfo_unexecuted_blocks=1 00:05:20.071 00:05:20.071 ' 00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.071 --rc genhtml_branch_coverage=1 00:05:20.071 --rc genhtml_function_coverage=1 00:05:20.071 --rc genhtml_legend=1 00:05:20.071 --rc geninfo_all_blocks=1 00:05:20.071 --rc geninfo_unexecuted_blocks=1 00:05:20.071 00:05:20.071 ' 00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.071 --rc genhtml_branch_coverage=1 00:05:20.071 --rc genhtml_function_coverage=1 00:05:20.071 --rc genhtml_legend=1 00:05:20.071 --rc geninfo_all_blocks=1 00:05:20.071 --rc geninfo_unexecuted_blocks=1 00:05:20.071 00:05:20.071 ' 00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.071 --rc genhtml_branch_coverage=1 00:05:20.071 --rc genhtml_function_coverage=1 00:05:20.071 --rc genhtml_legend=1 00:05:20.071 --rc geninfo_all_blocks=1 00:05:20.071 --rc geninfo_unexecuted_blocks=1 00:05:20.071 00:05:20.071 ' 00:05:20.071 12:08:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57278 00:05:20.071 12:08:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.071 12:08:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57278 00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@835 -- # '[' -z 57278 ']' 00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.071 12:08:50 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
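waitforlisten above simply polls until spdk_tgt (pid 57278, started with -e bdev to enable the bdev tracepoint group) opens /var/tmp/spdk.sock; every rpc_cmd after that is one JSON-RPC exchange over that UNIX socket. A self-contained POSIX sketch of such an exchange; the method name is taken from the tests below, and the single fixed-size read is a simplification:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int
    main(void)
    {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            struct sockaddr_un sa = { .sun_family = AF_UNIX };
            const char *req =
                    "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_get_bdevs\"}";
            char buf[8192];
            ssize_t n;

            strncpy(sa.sun_path, "/var/tmp/spdk.sock", sizeof(sa.sun_path) - 1);
            if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
                    perror("connect");  /* waitforlisten retries until this works */
                    return 1;
            }
            if (write(fd, req, strlen(req)) < 0) {
                    perror("write");
                    return 1;
            }
            n = read(fd, buf, sizeof(buf) - 1);
            if (n > 0) {
                    buf[n] = '\0';
                    printf("%s\n", buf);    /* JSON like the bdev dumps below */
            }
            close(fd);
            return 0;
    }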
00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.071 12:08:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.329 [2024-12-05 12:08:50.965053] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:05:20.329 [2024-12-05 12:08:50.965374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57278 ] 00:05:20.329 [2024-12-05 12:08:51.128101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.588 [2024-12-05 12:08:51.248804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:20.588 [2024-12-05 12:08:51.248870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57278' to capture a snapshot of events at runtime. 00:05:20.588 [2024-12-05 12:08:51.248882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:20.588 [2024-12-05 12:08:51.248893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:20.588 [2024-12-05 12:08:51.248903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57278 for offline analysis/debug. 00:05:20.588 [2024-12-05 12:08:51.249860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.159 12:08:51 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.159 12:08:51 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:21.159 12:08:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:21.159 12:08:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:21.159 12:08:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:21.159 12:08:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:21.159 12:08:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.159 12:08:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.159 12:08:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.159 ************************************ 00:05:21.159 START TEST rpc_integrity 00:05:21.159 ************************************ 00:05:21.159 12:08:51 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:21.159 12:08:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:21.159 12:08:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.159 12:08:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.159 12:08:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.159 12:08:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:21.159 12:08:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:21.159 12:08:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:21.159 12:08:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:21.159 12:08:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.159 12:08:51 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.159 12:08:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.159 12:08:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:21.159 12:08:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:21.159 12:08:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.159 12:08:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.159 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.159 12:08:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:21.159 { 00:05:21.159 "name": "Malloc0", 00:05:21.159 "aliases": [ 00:05:21.159 "1397aae4-c19c-4567-90d7-fbfd15770578" 00:05:21.159 ], 00:05:21.159 "product_name": "Malloc disk", 00:05:21.159 "block_size": 512, 00:05:21.159 "num_blocks": 16384, 00:05:21.159 "uuid": "1397aae4-c19c-4567-90d7-fbfd15770578", 00:05:21.159 "assigned_rate_limits": { 00:05:21.159 "rw_ios_per_sec": 0, 00:05:21.159 "rw_mbytes_per_sec": 0, 00:05:21.159 "r_mbytes_per_sec": 0, 00:05:21.159 "w_mbytes_per_sec": 0 00:05:21.159 }, 00:05:21.159 "claimed": false, 00:05:21.159 "zoned": false, 00:05:21.159 "supported_io_types": { 00:05:21.159 "read": true, 00:05:21.159 "write": true, 00:05:21.159 "unmap": true, 00:05:21.159 "flush": true, 00:05:21.159 "reset": true, 00:05:21.159 "nvme_admin": false, 00:05:21.159 "nvme_io": false, 00:05:21.159 "nvme_io_md": false, 00:05:21.159 "write_zeroes": true, 00:05:21.159 "zcopy": true, 00:05:21.159 "get_zone_info": false, 00:05:21.159 "zone_management": false, 00:05:21.159 "zone_append": false, 00:05:21.159 "compare": false, 00:05:21.159 "compare_and_write": false, 00:05:21.159 "abort": true, 00:05:21.159 "seek_hole": false, 00:05:21.159 "seek_data": false, 00:05:21.159 "copy": true, 00:05:21.159 "nvme_iov_md": false 00:05:21.159 }, 00:05:21.159 "memory_domains": [ 00:05:21.159 { 00:05:21.159 "dma_device_id": "system", 00:05:21.159 "dma_device_type": 1 00:05:21.159 }, 00:05:21.159 { 00:05:21.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.159 "dma_device_type": 2 00:05:21.159 } 00:05:21.159 ], 00:05:21.159 "driver_specific": {} 00:05:21.159 } 00:05:21.159 ]' 00:05:21.159 12:08:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:21.421 12:08:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:21.421 12:08:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.421 [2024-12-05 12:08:52.051913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:21.421 [2024-12-05 12:08:52.051974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:21.421 [2024-12-05 12:08:52.052001] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:21.421 [2024-12-05 12:08:52.052013] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:21.421 [2024-12-05 12:08:52.054375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:21.421 [2024-12-05 12:08:52.054420] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:21.421 Passthru0 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.421 
12:08:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.421 12:08:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:21.421 { 00:05:21.421 "name": "Malloc0", 00:05:21.421 "aliases": [ 00:05:21.421 "1397aae4-c19c-4567-90d7-fbfd15770578" 00:05:21.421 ], 00:05:21.421 "product_name": "Malloc disk", 00:05:21.421 "block_size": 512, 00:05:21.421 "num_blocks": 16384, 00:05:21.421 "uuid": "1397aae4-c19c-4567-90d7-fbfd15770578", 00:05:21.421 "assigned_rate_limits": { 00:05:21.421 "rw_ios_per_sec": 0, 00:05:21.421 "rw_mbytes_per_sec": 0, 00:05:21.421 "r_mbytes_per_sec": 0, 00:05:21.421 "w_mbytes_per_sec": 0 00:05:21.421 }, 00:05:21.421 "claimed": true, 00:05:21.421 "claim_type": "exclusive_write", 00:05:21.421 "zoned": false, 00:05:21.421 "supported_io_types": { 00:05:21.421 "read": true, 00:05:21.421 "write": true, 00:05:21.421 "unmap": true, 00:05:21.421 "flush": true, 00:05:21.421 "reset": true, 00:05:21.421 "nvme_admin": false, 00:05:21.421 "nvme_io": false, 00:05:21.421 "nvme_io_md": false, 00:05:21.421 "write_zeroes": true, 00:05:21.421 "zcopy": true, 00:05:21.421 "get_zone_info": false, 00:05:21.421 "zone_management": false, 00:05:21.421 "zone_append": false, 00:05:21.421 "compare": false, 00:05:21.421 "compare_and_write": false, 00:05:21.421 "abort": true, 00:05:21.421 "seek_hole": false, 00:05:21.421 "seek_data": false, 00:05:21.421 "copy": true, 00:05:21.421 "nvme_iov_md": false 00:05:21.421 }, 00:05:21.421 "memory_domains": [ 00:05:21.421 { 00:05:21.421 "dma_device_id": "system", 00:05:21.421 "dma_device_type": 1 00:05:21.421 }, 00:05:21.421 { 00:05:21.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.421 "dma_device_type": 2 00:05:21.421 } 00:05:21.421 ], 00:05:21.421 "driver_specific": {} 00:05:21.421 }, 00:05:21.421 { 00:05:21.421 "name": "Passthru0", 00:05:21.421 "aliases": [ 00:05:21.421 "e8f768bf-3a0a-5b58-944d-b682702c4406" 00:05:21.421 ], 00:05:21.421 "product_name": "passthru", 00:05:21.421 "block_size": 512, 00:05:21.421 "num_blocks": 16384, 00:05:21.421 "uuid": "e8f768bf-3a0a-5b58-944d-b682702c4406", 00:05:21.421 "assigned_rate_limits": { 00:05:21.421 "rw_ios_per_sec": 0, 00:05:21.421 "rw_mbytes_per_sec": 0, 00:05:21.421 "r_mbytes_per_sec": 0, 00:05:21.421 "w_mbytes_per_sec": 0 00:05:21.421 }, 00:05:21.421 "claimed": false, 00:05:21.421 "zoned": false, 00:05:21.421 "supported_io_types": { 00:05:21.421 "read": true, 00:05:21.421 "write": true, 00:05:21.421 "unmap": true, 00:05:21.421 "flush": true, 00:05:21.421 "reset": true, 00:05:21.421 "nvme_admin": false, 00:05:21.421 "nvme_io": false, 00:05:21.421 "nvme_io_md": false, 00:05:21.421 "write_zeroes": true, 00:05:21.421 "zcopy": true, 00:05:21.421 "get_zone_info": false, 00:05:21.421 "zone_management": false, 00:05:21.421 "zone_append": false, 00:05:21.421 "compare": false, 00:05:21.421 "compare_and_write": false, 00:05:21.421 "abort": true, 00:05:21.421 "seek_hole": false, 00:05:21.421 "seek_data": false, 00:05:21.421 "copy": true, 00:05:21.421 "nvme_iov_md": false 00:05:21.421 }, 00:05:21.421 "memory_domains": [ 00:05:21.421 { 00:05:21.421 "dma_device_id": "system", 00:05:21.421 "dma_device_type": 1 00:05:21.421 }, 00:05:21.421 { 00:05:21.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.421 "dma_device_type": 2 
00:05:21.421 } 00:05:21.421 ], 00:05:21.421 "driver_specific": { 00:05:21.421 "passthru": { 00:05:21.421 "name": "Passthru0", 00:05:21.421 "base_bdev_name": "Malloc0" 00:05:21.421 } 00:05:21.421 } 00:05:21.421 } 00:05:21.421 ]' 00:05:21.421 12:08:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:21.421 12:08:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:21.421 12:08:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.421 12:08:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.421 12:08:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.421 12:08:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:21.421 12:08:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:21.421 ************************************ 00:05:21.421 END TEST rpc_integrity 00:05:21.421 ************************************ 00:05:21.421 12:08:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:21.421 00:05:21.421 real 0m0.253s 00:05:21.421 user 0m0.131s 00:05:21.421 sys 0m0.033s 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.421 12:08:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.421 12:08:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:21.421 12:08:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.421 12:08:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.421 12:08:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.421 ************************************ 00:05:21.421 START TEST rpc_plugins 00:05:21.421 ************************************ 00:05:21.421 12:08:52 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:21.421 12:08:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:21.421 12:08:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.421 12:08:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.421 12:08:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.421 12:08:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:21.421 12:08:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:21.421 12:08:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.421 12:08:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.421 12:08:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.421 12:08:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:21.421 { 00:05:21.421 "name": "Malloc1", 00:05:21.421 "aliases": 
[ 00:05:21.421 "cfb21c01-8030-455f-9bdc-d93f663f82d0" 00:05:21.421 ], 00:05:21.421 "product_name": "Malloc disk", 00:05:21.421 "block_size": 4096, 00:05:21.421 "num_blocks": 256, 00:05:21.421 "uuid": "cfb21c01-8030-455f-9bdc-d93f663f82d0", 00:05:21.421 "assigned_rate_limits": { 00:05:21.421 "rw_ios_per_sec": 0, 00:05:21.421 "rw_mbytes_per_sec": 0, 00:05:21.421 "r_mbytes_per_sec": 0, 00:05:21.421 "w_mbytes_per_sec": 0 00:05:21.421 }, 00:05:21.421 "claimed": false, 00:05:21.421 "zoned": false, 00:05:21.421 "supported_io_types": { 00:05:21.421 "read": true, 00:05:21.421 "write": true, 00:05:21.421 "unmap": true, 00:05:21.421 "flush": true, 00:05:21.421 "reset": true, 00:05:21.421 "nvme_admin": false, 00:05:21.421 "nvme_io": false, 00:05:21.421 "nvme_io_md": false, 00:05:21.421 "write_zeroes": true, 00:05:21.421 "zcopy": true, 00:05:21.421 "get_zone_info": false, 00:05:21.421 "zone_management": false, 00:05:21.422 "zone_append": false, 00:05:21.422 "compare": false, 00:05:21.422 "compare_and_write": false, 00:05:21.422 "abort": true, 00:05:21.422 "seek_hole": false, 00:05:21.422 "seek_data": false, 00:05:21.422 "copy": true, 00:05:21.422 "nvme_iov_md": false 00:05:21.422 }, 00:05:21.422 "memory_domains": [ 00:05:21.422 { 00:05:21.422 "dma_device_id": "system", 00:05:21.422 "dma_device_type": 1 00:05:21.422 }, 00:05:21.422 { 00:05:21.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.422 "dma_device_type": 2 00:05:21.422 } 00:05:21.422 ], 00:05:21.422 "driver_specific": {} 00:05:21.422 } 00:05:21.422 ]' 00:05:21.422 12:08:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:21.682 12:08:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:21.683 12:08:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:21.683 12:08:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.683 12:08:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.683 12:08:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.683 12:08:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:21.683 12:08:52 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.683 12:08:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.683 12:08:52 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.683 12:08:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:21.683 12:08:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:21.683 ************************************ 00:05:21.683 END TEST rpc_plugins 00:05:21.683 ************************************ 00:05:21.683 12:08:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:21.683 00:05:21.683 real 0m0.123s 00:05:21.683 user 0m0.072s 00:05:21.683 sys 0m0.013s 00:05:21.683 12:08:52 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.683 12:08:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.683 12:08:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:21.683 12:08:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.683 12:08:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.683 12:08:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.683 ************************************ 00:05:21.683 START TEST rpc_trace_cmd_test 00:05:21.683 ************************************ 00:05:21.683 12:08:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:05:21.683 12:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:21.683 12:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:21.683 12:08:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.683 12:08:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:21.683 12:08:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.683 12:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:21.683 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57278", 00:05:21.683 "tpoint_group_mask": "0x8", 00:05:21.683 "iscsi_conn": { 00:05:21.683 "mask": "0x2", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 }, 00:05:21.683 "scsi": { 00:05:21.683 "mask": "0x4", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 }, 00:05:21.683 "bdev": { 00:05:21.683 "mask": "0x8", 00:05:21.683 "tpoint_mask": "0xffffffffffffffff" 00:05:21.683 }, 00:05:21.683 "nvmf_rdma": { 00:05:21.683 "mask": "0x10", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 }, 00:05:21.683 "nvmf_tcp": { 00:05:21.683 "mask": "0x20", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 }, 00:05:21.683 "ftl": { 00:05:21.683 "mask": "0x40", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 }, 00:05:21.683 "blobfs": { 00:05:21.683 "mask": "0x80", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 }, 00:05:21.683 "dsa": { 00:05:21.683 "mask": "0x200", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 }, 00:05:21.683 "thread": { 00:05:21.683 "mask": "0x400", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 }, 00:05:21.683 "nvme_pcie": { 00:05:21.683 "mask": "0x800", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 }, 00:05:21.683 "iaa": { 00:05:21.683 "mask": "0x1000", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 }, 00:05:21.683 "nvme_tcp": { 00:05:21.683 "mask": "0x2000", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 }, 00:05:21.683 "bdev_nvme": { 00:05:21.683 "mask": "0x4000", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 }, 00:05:21.683 "sock": { 00:05:21.683 "mask": "0x8000", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 }, 00:05:21.683 "blob": { 00:05:21.683 "mask": "0x10000", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 }, 00:05:21.683 "bdev_raid": { 00:05:21.683 "mask": "0x20000", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 }, 00:05:21.683 "scheduler": { 00:05:21.683 "mask": "0x40000", 00:05:21.683 "tpoint_mask": "0x0" 00:05:21.683 } 00:05:21.683 }' 00:05:21.683 12:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:21.683 12:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:21.683 12:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:21.683 12:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:21.683 12:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:21.683 12:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:21.683 12:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:21.944 12:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:21.944 12:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:21.944 ************************************ 00:05:21.944 END TEST rpc_trace_cmd_test 00:05:21.944 ************************************ 00:05:21.944 12:08:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:21.944 00:05:21.944 real 0m0.173s 
00:05:21.944 user 0m0.148s 00:05:21.944 sys 0m0.016s 00:05:21.944 12:08:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.944 12:08:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:21.944 12:08:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:21.944 12:08:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:21.944 12:08:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:21.944 12:08:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.944 12:08:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.944 12:08:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.944 ************************************ 00:05:21.944 START TEST rpc_daemon_integrity 00:05:21.944 ************************************ 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:21.944 { 00:05:21.944 "name": "Malloc2", 00:05:21.944 "aliases": [ 00:05:21.944 "84808537-5f64-4eca-959a-3c16f7f8a7b8" 00:05:21.944 ], 00:05:21.944 "product_name": "Malloc disk", 00:05:21.944 "block_size": 512, 00:05:21.944 "num_blocks": 16384, 00:05:21.944 "uuid": "84808537-5f64-4eca-959a-3c16f7f8a7b8", 00:05:21.944 "assigned_rate_limits": { 00:05:21.944 "rw_ios_per_sec": 0, 00:05:21.944 "rw_mbytes_per_sec": 0, 00:05:21.944 "r_mbytes_per_sec": 0, 00:05:21.944 "w_mbytes_per_sec": 0 00:05:21.944 }, 00:05:21.944 "claimed": false, 00:05:21.944 "zoned": false, 00:05:21.944 "supported_io_types": { 00:05:21.944 "read": true, 00:05:21.944 "write": true, 00:05:21.944 "unmap": true, 00:05:21.944 "flush": true, 00:05:21.944 "reset": true, 00:05:21.944 "nvme_admin": false, 00:05:21.944 "nvme_io": false, 00:05:21.944 "nvme_io_md": false, 00:05:21.944 "write_zeroes": true, 00:05:21.944 "zcopy": true, 00:05:21.944 "get_zone_info": false, 00:05:21.944 "zone_management": false, 00:05:21.944 "zone_append": false, 00:05:21.944 "compare": false, 00:05:21.944 
"compare_and_write": false, 00:05:21.944 "abort": true, 00:05:21.944 "seek_hole": false, 00:05:21.944 "seek_data": false, 00:05:21.944 "copy": true, 00:05:21.944 "nvme_iov_md": false 00:05:21.944 }, 00:05:21.944 "memory_domains": [ 00:05:21.944 { 00:05:21.944 "dma_device_id": "system", 00:05:21.944 "dma_device_type": 1 00:05:21.944 }, 00:05:21.944 { 00:05:21.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.944 "dma_device_type": 2 00:05:21.944 } 00:05:21.944 ], 00:05:21.944 "driver_specific": {} 00:05:21.944 } 00:05:21.944 ]' 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.944 [2024-12-05 12:08:52.765663] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:21.944 [2024-12-05 12:08:52.765726] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:21.944 [2024-12-05 12:08:52.765751] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:21.944 [2024-12-05 12:08:52.765764] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:21.944 [2024-12-05 12:08:52.768170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:21.944 [2024-12-05 12:08:52.768305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:21.944 Passthru0 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.944 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:21.944 { 00:05:21.945 "name": "Malloc2", 00:05:21.945 "aliases": [ 00:05:21.945 "84808537-5f64-4eca-959a-3c16f7f8a7b8" 00:05:21.945 ], 00:05:21.945 "product_name": "Malloc disk", 00:05:21.945 "block_size": 512, 00:05:21.945 "num_blocks": 16384, 00:05:21.945 "uuid": "84808537-5f64-4eca-959a-3c16f7f8a7b8", 00:05:21.945 "assigned_rate_limits": { 00:05:21.945 "rw_ios_per_sec": 0, 00:05:21.945 "rw_mbytes_per_sec": 0, 00:05:21.945 "r_mbytes_per_sec": 0, 00:05:21.945 "w_mbytes_per_sec": 0 00:05:21.945 }, 00:05:21.945 "claimed": true, 00:05:21.945 "claim_type": "exclusive_write", 00:05:21.945 "zoned": false, 00:05:21.945 "supported_io_types": { 00:05:21.945 "read": true, 00:05:21.945 "write": true, 00:05:21.945 "unmap": true, 00:05:21.945 "flush": true, 00:05:21.945 "reset": true, 00:05:21.945 "nvme_admin": false, 00:05:21.945 "nvme_io": false, 00:05:21.945 "nvme_io_md": false, 00:05:21.945 "write_zeroes": true, 00:05:21.945 "zcopy": true, 00:05:21.945 "get_zone_info": false, 00:05:21.945 "zone_management": false, 00:05:21.945 "zone_append": false, 00:05:21.945 "compare": false, 00:05:21.945 "compare_and_write": false, 00:05:21.945 "abort": true, 00:05:21.945 "seek_hole": false, 00:05:21.945 "seek_data": false, 
00:05:21.945 "copy": true, 00:05:21.945 "nvme_iov_md": false 00:05:21.945 }, 00:05:21.945 "memory_domains": [ 00:05:21.945 { 00:05:21.945 "dma_device_id": "system", 00:05:21.945 "dma_device_type": 1 00:05:21.945 }, 00:05:21.945 { 00:05:21.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.945 "dma_device_type": 2 00:05:21.945 } 00:05:21.945 ], 00:05:21.945 "driver_specific": {} 00:05:21.945 }, 00:05:21.945 { 00:05:21.945 "name": "Passthru0", 00:05:21.945 "aliases": [ 00:05:21.945 "ee1d32a9-7eb6-529a-a57a-c7d4f1811f98" 00:05:21.945 ], 00:05:21.945 "product_name": "passthru", 00:05:21.945 "block_size": 512, 00:05:21.945 "num_blocks": 16384, 00:05:21.945 "uuid": "ee1d32a9-7eb6-529a-a57a-c7d4f1811f98", 00:05:21.945 "assigned_rate_limits": { 00:05:21.945 "rw_ios_per_sec": 0, 00:05:21.945 "rw_mbytes_per_sec": 0, 00:05:21.945 "r_mbytes_per_sec": 0, 00:05:21.945 "w_mbytes_per_sec": 0 00:05:21.945 }, 00:05:21.945 "claimed": false, 00:05:21.945 "zoned": false, 00:05:21.945 "supported_io_types": { 00:05:21.945 "read": true, 00:05:21.945 "write": true, 00:05:21.945 "unmap": true, 00:05:21.945 "flush": true, 00:05:21.945 "reset": true, 00:05:21.945 "nvme_admin": false, 00:05:21.945 "nvme_io": false, 00:05:21.945 "nvme_io_md": false, 00:05:21.945 "write_zeroes": true, 00:05:21.945 "zcopy": true, 00:05:21.945 "get_zone_info": false, 00:05:21.945 "zone_management": false, 00:05:21.945 "zone_append": false, 00:05:21.945 "compare": false, 00:05:21.945 "compare_and_write": false, 00:05:21.945 "abort": true, 00:05:21.945 "seek_hole": false, 00:05:21.945 "seek_data": false, 00:05:21.945 "copy": true, 00:05:21.945 "nvme_iov_md": false 00:05:21.945 }, 00:05:21.945 "memory_domains": [ 00:05:21.945 { 00:05:21.945 "dma_device_id": "system", 00:05:21.945 "dma_device_type": 1 00:05:21.945 }, 00:05:21.945 { 00:05:21.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.945 "dma_device_type": 2 00:05:21.945 } 00:05:21.945 ], 00:05:21.945 "driver_specific": { 00:05:21.945 "passthru": { 00:05:21.945 "name": "Passthru0", 00:05:21.945 "base_bdev_name": "Malloc2" 00:05:21.945 } 00:05:21.945 } 00:05:21.945 } 00:05:21.945 ]' 00:05:21.945 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:22.206 ************************************ 00:05:22.206 END TEST rpc_daemon_integrity 00:05:22.206 ************************************ 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:22.206 00:05:22.206 real 0m0.246s 00:05:22.206 user 0m0.130s 00:05:22.206 sys 0m0.039s 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.206 12:08:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.206 12:08:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:22.206 12:08:52 rpc -- rpc/rpc.sh@84 -- # killprocess 57278 00:05:22.206 12:08:52 rpc -- common/autotest_common.sh@954 -- # '[' -z 57278 ']' 00:05:22.206 12:08:52 rpc -- common/autotest_common.sh@958 -- # kill -0 57278 00:05:22.206 12:08:52 rpc -- common/autotest_common.sh@959 -- # uname 00:05:22.206 12:08:52 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.206 12:08:52 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57278 00:05:22.206 killing process with pid 57278 00:05:22.206 12:08:52 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.206 12:08:52 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.206 12:08:52 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57278' 00:05:22.206 12:08:52 rpc -- common/autotest_common.sh@973 -- # kill 57278 00:05:22.206 12:08:52 rpc -- common/autotest_common.sh@978 -- # wait 57278 00:05:24.120 00:05:24.120 real 0m3.827s 00:05:24.120 user 0m4.192s 00:05:24.120 sys 0m0.709s 00:05:24.120 ************************************ 00:05:24.120 END TEST rpc 00:05:24.120 ************************************ 00:05:24.120 12:08:54 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.120 12:08:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.120 12:08:54 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:24.120 12:08:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.120 12:08:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.120 12:08:54 -- common/autotest_common.sh@10 -- # set +x 00:05:24.120 ************************************ 00:05:24.120 START TEST skip_rpc 00:05:24.120 ************************************ 00:05:24.120 12:08:54 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:24.120 * Looking for test storage... 
00:05:24.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:24.120 12:08:54 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.120 12:08:54 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.120 12:08:54 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.120 12:08:54 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.120 12:08:54 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:24.120 12:08:54 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.120 12:08:54 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.120 --rc genhtml_branch_coverage=1 00:05:24.120 --rc genhtml_function_coverage=1 00:05:24.120 --rc genhtml_legend=1 00:05:24.120 --rc geninfo_all_blocks=1 00:05:24.120 --rc geninfo_unexecuted_blocks=1 00:05:24.120 00:05:24.120 ' 00:05:24.120 12:08:54 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.120 --rc genhtml_branch_coverage=1 00:05:24.120 --rc genhtml_function_coverage=1 00:05:24.120 --rc genhtml_legend=1 00:05:24.120 --rc geninfo_all_blocks=1 00:05:24.120 --rc geninfo_unexecuted_blocks=1 00:05:24.120 00:05:24.120 ' 00:05:24.121 12:08:54 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:24.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.121 --rc genhtml_branch_coverage=1 00:05:24.121 --rc genhtml_function_coverage=1 00:05:24.121 --rc genhtml_legend=1 00:05:24.121 --rc geninfo_all_blocks=1 00:05:24.121 --rc geninfo_unexecuted_blocks=1 00:05:24.121 00:05:24.121 ' 00:05:24.121 12:08:54 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.121 --rc genhtml_branch_coverage=1 00:05:24.121 --rc genhtml_function_coverage=1 00:05:24.121 --rc genhtml_legend=1 00:05:24.121 --rc geninfo_all_blocks=1 00:05:24.121 --rc geninfo_unexecuted_blocks=1 00:05:24.121 00:05:24.121 ' 00:05:24.121 12:08:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:24.121 12:08:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:24.121 12:08:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:24.121 12:08:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.121 12:08:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.121 12:08:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.121 ************************************ 00:05:24.121 START TEST skip_rpc 00:05:24.121 ************************************ 00:05:24.121 12:08:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:24.121 12:08:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57496 00:05:24.121 12:08:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.121 12:08:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:24.121 12:08:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:24.121 [2024-12-05 12:08:54.855295] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:05:24.121 [2024-12-05 12:08:54.855433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57496 ] 00:05:24.381 [2024-12-05 12:08:55.013952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.381 [2024-12-05 12:08:55.131816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57496 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57496 ']' 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57496 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57496 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57496' 00:05:29.659 killing process with pid 57496 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57496 00:05:29.659 12:08:59 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57496 00:05:30.226 00:05:30.226 ************************************ 00:05:30.226 END TEST skip_rpc 00:05:30.226 ************************************ 00:05:30.226 real 0m6.299s 00:05:30.226 user 0m5.866s 00:05:30.226 sys 0m0.330s 00:05:30.226 12:09:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.226 12:09:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:30.487 12:09:01 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:30.487 12:09:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.487 12:09:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.487 12:09:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.487 ************************************ 00:05:30.487 START TEST skip_rpc_with_json 00:05:30.487 ************************************ 00:05:30.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.487 12:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:30.487 12:09:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:30.487 12:09:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57589 00:05:30.487 12:09:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.487 12:09:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57589 00:05:30.487 12:09:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.487 12:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57589 ']' 00:05:30.487 12:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.487 12:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.487 12:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.487 12:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.487 12:09:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.487 [2024-12-05 12:09:01.219419] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
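The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the waitforlisten helper, which polls the freshly started target until its RPC socket answers. A rough sketch of that polling loop follows; the function name, retry budget, and the use of rpc_get_methods as the probe are assumptions, not the exact helper from autotest_common.sh:

    # Hedged sketch of a waitforlisten-style poll against /var/tmp/spdk.sock
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1            # target died early
            rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                              # never came up
    }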
00:05:30.487 [2024-12-05 12:09:01.220624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57589 ] 00:05:30.747 [2024-12-05 12:09:01.397187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.747 [2024-12-05 12:09:01.510093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.319 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.319 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:31.319 12:09:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:31.319 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.319 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.319 [2024-12-05 12:09:02.174800] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:31.319 request: 00:05:31.319 { 00:05:31.319 "trtype": "tcp", 00:05:31.319 "method": "nvmf_get_transports", 00:05:31.319 "req_id": 1 00:05:31.319 } 00:05:31.319 Got JSON-RPC error response 00:05:31.319 response: 00:05:31.319 { 00:05:31.319 "code": -19, 00:05:31.319 "message": "No such device" 00:05:31.319 } 00:05:31.319 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:31.319 12:09:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:31.319 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.319 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.319 [2024-12-05 12:09:02.182913] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:31.579 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.579 12:09:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:31.579 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.579 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:31.579 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.579 12:09:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:31.579 { 00:05:31.579 "subsystems": [ 00:05:31.579 { 00:05:31.579 "subsystem": "fsdev", 00:05:31.579 "config": [ 00:05:31.579 { 00:05:31.579 "method": "fsdev_set_opts", 00:05:31.579 "params": { 00:05:31.579 "fsdev_io_pool_size": 65535, 00:05:31.579 "fsdev_io_cache_size": 256 00:05:31.579 } 00:05:31.579 } 00:05:31.579 ] 00:05:31.579 }, 00:05:31.579 { 00:05:31.579 "subsystem": "keyring", 00:05:31.579 "config": [] 00:05:31.579 }, 00:05:31.579 { 00:05:31.579 "subsystem": "iobuf", 00:05:31.579 "config": [ 00:05:31.579 { 00:05:31.579 "method": "iobuf_set_options", 00:05:31.579 "params": { 00:05:31.579 "small_pool_count": 8192, 00:05:31.579 "large_pool_count": 1024, 00:05:31.579 "small_bufsize": 8192, 00:05:31.580 "large_bufsize": 135168, 00:05:31.580 "enable_numa": false 00:05:31.580 } 00:05:31.580 } 00:05:31.580 ] 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "subsystem": "sock", 00:05:31.580 "config": [ 00:05:31.580 { 
00:05:31.580 "method": "sock_set_default_impl", 00:05:31.580 "params": { 00:05:31.580 "impl_name": "posix" 00:05:31.580 } 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "method": "sock_impl_set_options", 00:05:31.580 "params": { 00:05:31.580 "impl_name": "ssl", 00:05:31.580 "recv_buf_size": 4096, 00:05:31.580 "send_buf_size": 4096, 00:05:31.580 "enable_recv_pipe": true, 00:05:31.580 "enable_quickack": false, 00:05:31.580 "enable_placement_id": 0, 00:05:31.580 "enable_zerocopy_send_server": true, 00:05:31.580 "enable_zerocopy_send_client": false, 00:05:31.580 "zerocopy_threshold": 0, 00:05:31.580 "tls_version": 0, 00:05:31.580 "enable_ktls": false 00:05:31.580 } 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "method": "sock_impl_set_options", 00:05:31.580 "params": { 00:05:31.580 "impl_name": "posix", 00:05:31.580 "recv_buf_size": 2097152, 00:05:31.580 "send_buf_size": 2097152, 00:05:31.580 "enable_recv_pipe": true, 00:05:31.580 "enable_quickack": false, 00:05:31.580 "enable_placement_id": 0, 00:05:31.580 "enable_zerocopy_send_server": true, 00:05:31.580 "enable_zerocopy_send_client": false, 00:05:31.580 "zerocopy_threshold": 0, 00:05:31.580 "tls_version": 0, 00:05:31.580 "enable_ktls": false 00:05:31.580 } 00:05:31.580 } 00:05:31.580 ] 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "subsystem": "vmd", 00:05:31.580 "config": [] 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "subsystem": "accel", 00:05:31.580 "config": [ 00:05:31.580 { 00:05:31.580 "method": "accel_set_options", 00:05:31.580 "params": { 00:05:31.580 "small_cache_size": 128, 00:05:31.580 "large_cache_size": 16, 00:05:31.580 "task_count": 2048, 00:05:31.580 "sequence_count": 2048, 00:05:31.580 "buf_count": 2048 00:05:31.580 } 00:05:31.580 } 00:05:31.580 ] 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "subsystem": "bdev", 00:05:31.580 "config": [ 00:05:31.580 { 00:05:31.580 "method": "bdev_set_options", 00:05:31.580 "params": { 00:05:31.580 "bdev_io_pool_size": 65535, 00:05:31.580 "bdev_io_cache_size": 256, 00:05:31.580 "bdev_auto_examine": true, 00:05:31.580 "iobuf_small_cache_size": 128, 00:05:31.580 "iobuf_large_cache_size": 16 00:05:31.580 } 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "method": "bdev_raid_set_options", 00:05:31.580 "params": { 00:05:31.580 "process_window_size_kb": 1024, 00:05:31.580 "process_max_bandwidth_mb_sec": 0 00:05:31.580 } 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "method": "bdev_iscsi_set_options", 00:05:31.580 "params": { 00:05:31.580 "timeout_sec": 30 00:05:31.580 } 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "method": "bdev_nvme_set_options", 00:05:31.580 "params": { 00:05:31.580 "action_on_timeout": "none", 00:05:31.580 "timeout_us": 0, 00:05:31.580 "timeout_admin_us": 0, 00:05:31.580 "keep_alive_timeout_ms": 10000, 00:05:31.580 "arbitration_burst": 0, 00:05:31.580 "low_priority_weight": 0, 00:05:31.580 "medium_priority_weight": 0, 00:05:31.580 "high_priority_weight": 0, 00:05:31.580 "nvme_adminq_poll_period_us": 10000, 00:05:31.580 "nvme_ioq_poll_period_us": 0, 00:05:31.580 "io_queue_requests": 0, 00:05:31.580 "delay_cmd_submit": true, 00:05:31.580 "transport_retry_count": 4, 00:05:31.580 "bdev_retry_count": 3, 00:05:31.580 "transport_ack_timeout": 0, 00:05:31.580 "ctrlr_loss_timeout_sec": 0, 00:05:31.580 "reconnect_delay_sec": 0, 00:05:31.580 "fast_io_fail_timeout_sec": 0, 00:05:31.580 "disable_auto_failback": false, 00:05:31.580 "generate_uuids": false, 00:05:31.580 "transport_tos": 0, 00:05:31.580 "nvme_error_stat": false, 00:05:31.580 "rdma_srq_size": 0, 00:05:31.580 "io_path_stat": false, 
00:05:31.580 "allow_accel_sequence": false, 00:05:31.580 "rdma_max_cq_size": 0, 00:05:31.580 "rdma_cm_event_timeout_ms": 0, 00:05:31.580 "dhchap_digests": [ 00:05:31.580 "sha256", 00:05:31.580 "sha384", 00:05:31.580 "sha512" 00:05:31.580 ], 00:05:31.580 "dhchap_dhgroups": [ 00:05:31.580 "null", 00:05:31.580 "ffdhe2048", 00:05:31.580 "ffdhe3072", 00:05:31.580 "ffdhe4096", 00:05:31.580 "ffdhe6144", 00:05:31.580 "ffdhe8192" 00:05:31.580 ] 00:05:31.580 } 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "method": "bdev_nvme_set_hotplug", 00:05:31.580 "params": { 00:05:31.580 "period_us": 100000, 00:05:31.580 "enable": false 00:05:31.580 } 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "method": "bdev_wait_for_examine" 00:05:31.580 } 00:05:31.580 ] 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "subsystem": "scsi", 00:05:31.580 "config": null 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "subsystem": "scheduler", 00:05:31.580 "config": [ 00:05:31.580 { 00:05:31.580 "method": "framework_set_scheduler", 00:05:31.580 "params": { 00:05:31.580 "name": "static" 00:05:31.580 } 00:05:31.580 } 00:05:31.580 ] 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "subsystem": "vhost_scsi", 00:05:31.580 "config": [] 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "subsystem": "vhost_blk", 00:05:31.580 "config": [] 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "subsystem": "ublk", 00:05:31.580 "config": [] 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "subsystem": "nbd", 00:05:31.580 "config": [] 00:05:31.580 }, 00:05:31.580 { 00:05:31.580 "subsystem": "nvmf", 00:05:31.580 "config": [ 00:05:31.580 { 00:05:31.580 "method": "nvmf_set_config", 00:05:31.580 "params": { 00:05:31.580 "discovery_filter": "match_any", 00:05:31.580 "admin_cmd_passthru": { 00:05:31.580 "identify_ctrlr": false 00:05:31.580 }, 00:05:31.580 "dhchap_digests": [ 00:05:31.580 "sha256", 00:05:31.580 "sha384", 00:05:31.581 "sha512" 00:05:31.581 ], 00:05:31.581 "dhchap_dhgroups": [ 00:05:31.581 "null", 00:05:31.581 "ffdhe2048", 00:05:31.581 "ffdhe3072", 00:05:31.581 "ffdhe4096", 00:05:31.581 "ffdhe6144", 00:05:31.581 "ffdhe8192" 00:05:31.581 ] 00:05:31.581 } 00:05:31.581 }, 00:05:31.581 { 00:05:31.581 "method": "nvmf_set_max_subsystems", 00:05:31.581 "params": { 00:05:31.581 "max_subsystems": 1024 00:05:31.581 } 00:05:31.581 }, 00:05:31.581 { 00:05:31.581 "method": "nvmf_set_crdt", 00:05:31.581 "params": { 00:05:31.581 "crdt1": 0, 00:05:31.581 "crdt2": 0, 00:05:31.581 "crdt3": 0 00:05:31.581 } 00:05:31.581 }, 00:05:31.581 { 00:05:31.581 "method": "nvmf_create_transport", 00:05:31.581 "params": { 00:05:31.581 "trtype": "TCP", 00:05:31.581 "max_queue_depth": 128, 00:05:31.581 "max_io_qpairs_per_ctrlr": 127, 00:05:31.581 "in_capsule_data_size": 4096, 00:05:31.581 "max_io_size": 131072, 00:05:31.581 "io_unit_size": 131072, 00:05:31.581 "max_aq_depth": 128, 00:05:31.581 "num_shared_buffers": 511, 00:05:31.581 "buf_cache_size": 4294967295, 00:05:31.581 "dif_insert_or_strip": false, 00:05:31.581 "zcopy": false, 00:05:31.581 "c2h_success": true, 00:05:31.581 "sock_priority": 0, 00:05:31.581 "abort_timeout_sec": 1, 00:05:31.581 "ack_timeout": 0, 00:05:31.581 "data_wr_pool_size": 0 00:05:31.581 } 00:05:31.581 } 00:05:31.581 ] 00:05:31.581 }, 00:05:31.581 { 00:05:31.581 "subsystem": "iscsi", 00:05:31.581 "config": [ 00:05:31.581 { 00:05:31.581 "method": "iscsi_set_options", 00:05:31.581 "params": { 00:05:31.581 "node_base": "iqn.2016-06.io.spdk", 00:05:31.581 "max_sessions": 128, 00:05:31.581 "max_connections_per_session": 2, 00:05:31.581 "max_queue_depth": 64, 00:05:31.581 
"default_time2wait": 2, 00:05:31.581 "default_time2retain": 20, 00:05:31.581 "first_burst_length": 8192, 00:05:31.581 "immediate_data": true, 00:05:31.581 "allow_duplicated_isid": false, 00:05:31.581 "error_recovery_level": 0, 00:05:31.581 "nop_timeout": 60, 00:05:31.581 "nop_in_interval": 30, 00:05:31.581 "disable_chap": false, 00:05:31.581 "require_chap": false, 00:05:31.581 "mutual_chap": false, 00:05:31.581 "chap_group": 0, 00:05:31.581 "max_large_datain_per_connection": 64, 00:05:31.581 "max_r2t_per_connection": 4, 00:05:31.581 "pdu_pool_size": 36864, 00:05:31.581 "immediate_data_pool_size": 16384, 00:05:31.581 "data_out_pool_size": 2048 00:05:31.581 } 00:05:31.581 } 00:05:31.581 ] 00:05:31.581 } 00:05:31.581 ] 00:05:31.581 } 00:05:31.581 12:09:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:31.581 12:09:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57589 00:05:31.581 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57589 ']' 00:05:31.581 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57589 00:05:31.581 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:31.581 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.581 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57589 00:05:31.581 killing process with pid 57589 00:05:31.581 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.581 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.581 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57589' 00:05:31.581 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57589 00:05:31.581 12:09:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57589 00:05:33.488 12:09:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57634 00:05:33.488 12:09:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:33.488 12:09:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:38.749 12:09:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57634 00:05:38.749 12:09:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57634 ']' 00:05:38.749 12:09:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57634 00:05:38.749 12:09:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:38.749 12:09:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.749 12:09:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57634 00:05:38.749 killing process with pid 57634 00:05:38.749 12:09:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.749 12:09:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.749 12:09:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57634' 00:05:38.749 12:09:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57634 00:05:38.749 12:09:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57634 00:05:39.741 12:09:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:39.741 12:09:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:39.741 ************************************ 00:05:39.741 END TEST skip_rpc_with_json 00:05:39.741 ************************************ 00:05:39.741 00:05:39.741 real 0m9.197s 00:05:39.741 user 0m8.711s 00:05:39.741 sys 0m0.721s 00:05:39.741 12:09:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.741 12:09:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.742 12:09:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:39.742 12:09:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.742 12:09:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.742 12:09:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.742 ************************************ 00:05:39.742 START TEST skip_rpc_with_delay 00:05:39.742 ************************************ 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.742 [2024-12-05 12:09:10.452710] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
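The *ERROR* line above is the expected outcome of skip_rpc_with_delay: --wait-for-rpc is meaningless once --no-rpc-server disables the RPC server, so spdk_tgt must refuse to start, and the test wraps the launch in the NOT helper, which inverts the exit status. A condensed sketch of that assertion (NOT here is a simplified stand-in for the helper in autotest_common.sh, and SPDK_BIN is an assumed variable standing in for build/bin/spdk_tgt):

    NOT() { ! "$@"; }   # succeeds only if the wrapped command fails
    NOT "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc \
        && echo 'spdk_tgt correctly rejected --wait-for-rpc'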
00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.742 00:05:39.742 real 0m0.124s 00:05:39.742 user 0m0.057s 00:05:39.742 sys 0m0.066s 00:05:39.742 ************************************ 00:05:39.742 END TEST skip_rpc_with_delay 00:05:39.742 ************************************ 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.742 12:09:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:39.742 12:09:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:39.742 12:09:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:39.742 12:09:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:39.742 12:09:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.742 12:09:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.742 12:09:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.742 ************************************ 00:05:39.742 START TEST exit_on_failed_rpc_init 00:05:39.742 ************************************ 00:05:39.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.742 12:09:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:39.742 12:09:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57751 00:05:39.742 12:09:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57751 00:05:39.742 12:09:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57751 ']' 00:05:39.742 12:09:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.742 12:09:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.742 12:09:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.742 12:09:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.742 12:09:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.742 12:09:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.001 [2024-12-05 12:09:10.641183] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:05:40.001 [2024-12-05 12:09:10.641312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57751 ] 00:05:40.001 [2024-12-05 12:09:10.803372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.260 [2024-12-05 12:09:10.921282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.828 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.828 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:40.828 12:09:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.828 12:09:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.828 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:40.828 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.828 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.828 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.828 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.828 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.828 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.828 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.828 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.828 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:40.828 12:09:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.828 [2024-12-05 12:09:11.663128] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:05:40.828 [2024-12-05 12:09:11.663441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57769 ] 00:05:41.089 [2024-12-05 12:09:11.825103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.089 [2024-12-05 12:09:11.945141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.089 [2024-12-05 12:09:11.945250] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:41.089 [2024-12-05 12:09:11.945264] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:41.089 [2024-12-05 12:09:11.945279] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57751 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57751 ']' 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57751 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57751 00:05:41.351 killing process with pid 57751 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57751' 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57751 00:05:41.351 12:09:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57751 00:05:43.262 ************************************ 00:05:43.262 END TEST exit_on_failed_rpc_init 00:05:43.262 ************************************ 00:05:43.262 00:05:43.262 real 0m3.265s 00:05:43.262 user 0m3.590s 00:05:43.262 sys 0m0.498s 00:05:43.262 12:09:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.262 12:09:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.262 12:09:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:43.262 00:05:43.262 real 0m19.255s 00:05:43.262 user 0m18.358s 00:05:43.262 sys 0m1.808s 00:05:43.262 12:09:13 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.262 ************************************ 00:05:43.262 END TEST skip_rpc 00:05:43.262 ************************************ 00:05:43.262 12:09:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.262 12:09:13 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:43.262 12:09:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.262 12:09:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.262 12:09:13 -- common/autotest_common.sh@10 -- # set +x 00:05:43.262 
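Before the rpc_client suite starts below, note how exit_on_failed_rpc_init passed above: a second target, started while the first still owns /var/tmp/spdk.sock, must fail RPC initialization ("RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.") and exit non-zero. A bare-bones reproduction of that collision, reusing the NOT and waitforlisten_sketch helpers sketched earlier (SPDK_BIN again stands in for build/bin/spdk_tgt):

    # Provoke the 'spdk.sock in use' failure seen in the trace above
    "$SPDK_BIN" -m 0x1 & first=$!
    waitforlisten_sketch "$first"     # wait until the first target owns the socket
    NOT "$SPDK_BIN" -m 0x2            # same default socket -> must exit non-zero
    kill "$first"; wait "$first" || true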
************************************ 00:05:43.262 START TEST rpc_client 00:05:43.262 ************************************ 00:05:43.262 12:09:13 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:43.262 * Looking for test storage... 00:05:43.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:43.262 12:09:13 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.262 12:09:13 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.262 12:09:13 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.262 12:09:14 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.262 12:09:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:43.262 12:09:14 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.262 12:09:14 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.262 --rc genhtml_branch_coverage=1 00:05:43.262 --rc genhtml_function_coverage=1 00:05:43.262 --rc genhtml_legend=1 00:05:43.262 --rc geninfo_all_blocks=1 00:05:43.262 --rc geninfo_unexecuted_blocks=1 00:05:43.262 00:05:43.262 ' 00:05:43.262 12:09:14 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.262 --rc genhtml_branch_coverage=1 00:05:43.262 --rc genhtml_function_coverage=1 00:05:43.262 --rc genhtml_legend=1 00:05:43.262 --rc geninfo_all_blocks=1 00:05:43.262 --rc geninfo_unexecuted_blocks=1 00:05:43.262 00:05:43.262 ' 00:05:43.262 12:09:14 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.262 --rc genhtml_branch_coverage=1 00:05:43.262 --rc genhtml_function_coverage=1 00:05:43.262 --rc genhtml_legend=1 00:05:43.262 --rc geninfo_all_blocks=1 00:05:43.262 --rc geninfo_unexecuted_blocks=1 00:05:43.262 00:05:43.262 ' 00:05:43.262 12:09:14 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.262 --rc genhtml_branch_coverage=1 00:05:43.262 --rc genhtml_function_coverage=1 00:05:43.262 --rc genhtml_legend=1 00:05:43.262 --rc geninfo_all_blocks=1 00:05:43.262 --rc geninfo_unexecuted_blocks=1 00:05:43.262 00:05:43.262 ' 00:05:43.262 12:09:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:43.262 OK 00:05:43.262 12:09:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:43.262 ************************************ 00:05:43.262 END TEST rpc_client 00:05:43.262 ************************************ 00:05:43.262 00:05:43.262 real 0m0.196s 00:05:43.262 user 0m0.115s 00:05:43.262 sys 0m0.087s 00:05:43.262 12:09:14 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.262 12:09:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:43.523 12:09:14 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:43.523 12:09:14 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.523 12:09:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.523 12:09:14 -- common/autotest_common.sh@10 -- # set +x 00:05:43.523 ************************************ 00:05:43.523 START TEST json_config 00:05:43.523 ************************************ 00:05:43.523 12:09:14 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:43.523 12:09:14 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.523 12:09:14 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.523 12:09:14 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.523 12:09:14 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.523 12:09:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.523 12:09:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.523 12:09:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.523 12:09:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.523 12:09:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.523 12:09:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.523 12:09:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.523 12:09:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.523 12:09:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.523 12:09:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.523 12:09:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.523 12:09:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:43.523 12:09:14 json_config -- scripts/common.sh@345 -- # : 1 00:05:43.523 12:09:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.523 12:09:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.524 12:09:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:43.524 12:09:14 json_config -- scripts/common.sh@353 -- # local d=1 00:05:43.524 12:09:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.524 12:09:14 json_config -- scripts/common.sh@355 -- # echo 1 00:05:43.524 12:09:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.524 12:09:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:43.524 12:09:14 json_config -- scripts/common.sh@353 -- # local d=2 00:05:43.524 12:09:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.524 12:09:14 json_config -- scripts/common.sh@355 -- # echo 2 00:05:43.524 12:09:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.524 12:09:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.524 12:09:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.524 12:09:14 json_config -- scripts/common.sh@368 -- # return 0 00:05:43.524 12:09:14 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.524 12:09:14 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.524 --rc genhtml_branch_coverage=1 00:05:43.524 --rc genhtml_function_coverage=1 00:05:43.524 --rc genhtml_legend=1 00:05:43.524 --rc geninfo_all_blocks=1 00:05:43.524 --rc geninfo_unexecuted_blocks=1 00:05:43.524 00:05:43.524 ' 00:05:43.524 12:09:14 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.524 --rc genhtml_branch_coverage=1 00:05:43.524 --rc genhtml_function_coverage=1 00:05:43.524 --rc genhtml_legend=1 00:05:43.524 --rc geninfo_all_blocks=1 00:05:43.524 --rc geninfo_unexecuted_blocks=1 00:05:43.524 00:05:43.524 ' 00:05:43.524 12:09:14 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.524 --rc genhtml_branch_coverage=1 00:05:43.524 --rc genhtml_function_coverage=1 00:05:43.524 --rc genhtml_legend=1 00:05:43.524 --rc geninfo_all_blocks=1 00:05:43.524 --rc geninfo_unexecuted_blocks=1 00:05:43.524 00:05:43.524 ' 00:05:43.524 12:09:14 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.524 --rc genhtml_branch_coverage=1 00:05:43.524 --rc genhtml_function_coverage=1 00:05:43.524 --rc genhtml_legend=1 00:05:43.524 --rc geninfo_all_blocks=1 00:05:43.524 --rc geninfo_unexecuted_blocks=1 00:05:43.524 00:05:43.524 ' 00:05:43.524 12:09:14 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.524 12:09:14 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3fadf30e-042d-4555-8c89-3612ece365ef 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=3fadf30e-042d-4555-8c89-3612ece365ef 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:43.524 12:09:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:43.524 12:09:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.524 12:09:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.524 12:09:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.524 12:09:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.524 12:09:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.524 12:09:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.524 12:09:14 json_config -- paths/export.sh@5 -- # export PATH 00:05:43.524 12:09:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@51 -- # : 0 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:43.524 12:09:14 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:43.524 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:43.524 12:09:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:43.524 12:09:14 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:43.524 12:09:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:43.524 12:09:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:43.524 12:09:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:43.524 12:09:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:43.524 12:09:14 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:43.524 WARNING: No tests are enabled so not running JSON configuration tests 00:05:43.524 12:09:14 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:43.524 00:05:43.524 real 0m0.140s 00:05:43.524 user 0m0.094s 00:05:43.524 sys 0m0.047s 00:05:43.524 12:09:14 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.524 12:09:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.524 ************************************ 00:05:43.524 END TEST json_config 00:05:43.524 ************************************ 00:05:43.524 12:09:14 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:43.524 12:09:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.524 12:09:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.524 12:09:14 -- common/autotest_common.sh@10 -- # set +x 00:05:43.524 ************************************ 00:05:43.524 START TEST json_config_extra_key 00:05:43.524 ************************************ 00:05:43.524 12:09:14 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:43.524 12:09:14 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.524 12:09:14 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.524 12:09:14 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.785 12:09:14 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.785 12:09:14 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.785 12:09:14 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.785 12:09:14 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.785 12:09:14 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.785 12:09:14 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.785 12:09:14 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.785 12:09:14 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.785 12:09:14 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.785 12:09:14 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.785 12:09:14 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.785 12:09:14 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.785 12:09:14 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:43.785 12:09:14 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:43.785 12:09:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:43.786 12:09:14 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.786 12:09:14 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.786 --rc genhtml_branch_coverage=1 00:05:43.786 --rc genhtml_function_coverage=1 00:05:43.786 --rc genhtml_legend=1 00:05:43.786 --rc geninfo_all_blocks=1 00:05:43.786 --rc geninfo_unexecuted_blocks=1 00:05:43.786 00:05:43.786 ' 00:05:43.786 12:09:14 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.786 --rc genhtml_branch_coverage=1 00:05:43.786 --rc genhtml_function_coverage=1 00:05:43.786 --rc genhtml_legend=1 00:05:43.786 --rc geninfo_all_blocks=1 00:05:43.786 --rc geninfo_unexecuted_blocks=1 00:05:43.786 00:05:43.786 ' 00:05:43.786 12:09:14 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.786 --rc genhtml_branch_coverage=1 00:05:43.786 --rc genhtml_function_coverage=1 00:05:43.786 --rc genhtml_legend=1 00:05:43.786 --rc geninfo_all_blocks=1 00:05:43.786 --rc geninfo_unexecuted_blocks=1 00:05:43.786 00:05:43.786 ' 00:05:43.786 12:09:14 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.786 --rc genhtml_branch_coverage=1 00:05:43.786 --rc 
genhtml_function_coverage=1 00:05:43.786 --rc genhtml_legend=1 00:05:43.786 --rc geninfo_all_blocks=1 00:05:43.786 --rc geninfo_unexecuted_blocks=1 00:05:43.786 00:05:43.786 ' 00:05:43.786 12:09:14 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3fadf30e-042d-4555-8c89-3612ece365ef 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=3fadf30e-042d-4555-8c89-3612ece365ef 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.786 12:09:14 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.786 12:09:14 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.786 12:09:14 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.786 12:09:14 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.786 12:09:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:43.786 12:09:14 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:43.786 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:43.786 12:09:14 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:43.786 12:09:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:43.786 12:09:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:43.786 12:09:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:43.786 12:09:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:43.786 12:09:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:43.786 12:09:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:43.786 12:09:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:43.786 12:09:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:43.786 12:09:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:43.786 12:09:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.786 12:09:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:43.786 INFO: launching applications... 
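An aside before the launch trace below: every test section in this log opens with the same lcov probe, traced above as scripts/common.sh 'lt 1.15 2' / cmp_versions. The trace walks a generic component-wise comparison: split both versions on '.', '-' and ':', then compare numerically field by field, first difference wins. A standalone sketch of that logic, reconstructed from the trace alone (the function name cmp_versions_lt, the 0-padding of missing fields, and the plain-numeric-fields assumption are mine; the real helper also regex-checks each field, per the 'decimal' steps in the trace):

    # Component-wise version compare as walked by the trace above:
    # returns 0 (shell-true) when $1 is strictly older than $2.
    cmp_versions_lt() {
        local -a ver1 ver2
        local v a b
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing trailing fields count as 0
            ((a > b)) && return 1
            ((a < b)) && return 0             # first differing field decides
        done
        return 1                              # equal versions are not "less than"
    }

    cmp_versions_lt 1.15 2 && echo 'lcov 1.15 < 2'   # matches the traced outcome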
00:05:43.786 12:09:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:43.786 12:09:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:43.786 12:09:14 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:43.786 12:09:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:43.786 12:09:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:43.786 12:09:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:43.786 12:09:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.786 12:09:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.786 12:09:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57968 00:05:43.786 12:09:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:43.786 Waiting for target to run... 00:05:43.786 12:09:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57968 /var/tmp/spdk_tgt.sock 00:05:43.786 12:09:14 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57968 ']' 00:05:43.786 12:09:14 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.786 12:09:14 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.786 12:09:14 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.786 12:09:14 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.786 12:09:14 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:43.786 12:09:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.786 [2024-12-05 12:09:14.561370] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:05:43.787 [2024-12-05 12:09:14.561678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57968 ] 00:05:44.355 [2024-12-05 12:09:14.935405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.355 [2024-12-05 12:09:15.045206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.925 12:09:15 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.925 12:09:15 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:44.925 12:09:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:44.925 00:05:44.925 INFO: shutting down applications... 00:05:44.925 12:09:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
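The start/wait pair just traced, distilled: json_config/common.sh@21 launches spdk_tgt with the app's parameters and records the pid (app_pid[target]=57968 above), then waitforlisten polls with max_retries=100. The polling body itself runs under xtrace_disable, so it is not visible here; the loop below is a minimal stand-in under that caveat, borrowing rpc_get_methods as the probe because it appears later in this log (the 0.1 s interval is likewise assumed):

    # Launch the target as traced (json_config/common.sh@21), then wait
    # for its RPC socket to answer. The launch line and retry budget come
    # straight from the trace; the polling details are assumptions.
    SPDK=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk_tgt.sock

    "$SPDK"/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" \
        --json "$SPDK"/test/json_config/extra_key.json &
    pid=$!

    echo 'Waiting for target to run...'
    for ((i = 0; i < 100; i++)); do                # max_retries=100, as traced
        kill -0 "$pid" 2>/dev/null || exit 1       # target died during startup
        "$SPDK"/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.1
    done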
00:05:44.925 12:09:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:44.925 12:09:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:44.925 12:09:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:44.925 12:09:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57968 ]] 00:05:44.925 12:09:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57968 00:05:44.925 12:09:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:44.925 12:09:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.925 12:09:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57968 00:05:44.925 12:09:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.506 12:09:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.506 12:09:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.506 12:09:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57968 00:05:45.506 12:09:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.766 12:09:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.766 12:09:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.766 12:09:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57968 00:05:45.766 12:09:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:46.336 12:09:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:46.336 12:09:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.336 12:09:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57968 00:05:46.336 12:09:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:46.906 12:09:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:46.906 12:09:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.906 12:09:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57968 00:05:46.906 12:09:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:46.906 12:09:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:46.906 SPDK target shutdown done 00:05:46.906 12:09:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:46.906 12:09:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:46.906 Success 00:05:46.906 12:09:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:46.906 ************************************ 00:05:46.906 END TEST json_config_extra_key 00:05:46.907 ************************************ 00:05:46.907 00:05:46.907 real 0m3.254s 00:05:46.907 user 0m2.911s 00:05:46.907 sys 0m0.456s 00:05:46.907 12:09:17 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.907 12:09:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:46.907 12:09:17 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:46.907 12:09:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.907 12:09:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.907 12:09:17 -- common/autotest_common.sh@10 -- # set +x 00:05:46.907 
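Condensed from the shutdown trace above (json_config/common.sh@38-45): teardown sends SIGINT, then re-checks the pid with kill -0 every 0.5 s, allowing up to 30 iterations (roughly 15 s) before giving up; the four sleep 0.5 rounds visible above are that loop running until pid 57968 exited. As a standalone sketch, with only the function wrapper added:

    # SIGINT, then poll: kill -0 fails once the pid is gone, which is
    # the loop's exit condition in the trace above.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        return 1    # still alive after ~15 s
    }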
************************************ 00:05:46.907 START TEST alias_rpc 00:05:46.907 ************************************ 00:05:46.907 12:09:17 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:46.907 * Looking for test storage... 00:05:46.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:46.907 12:09:17 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.907 12:09:17 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.907 12:09:17 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.168 12:09:17 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:47.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.168 12:09:17 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:47.168 12:09:17 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.168 12:09:17 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.168 --rc genhtml_branch_coverage=1 00:05:47.168 --rc genhtml_function_coverage=1 00:05:47.168 --rc genhtml_legend=1 00:05:47.168 --rc geninfo_all_blocks=1 00:05:47.168 --rc geninfo_unexecuted_blocks=1 00:05:47.168 00:05:47.168 ' 00:05:47.168 12:09:17 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.168 --rc genhtml_branch_coverage=1 00:05:47.168 --rc genhtml_function_coverage=1 00:05:47.168 --rc genhtml_legend=1 00:05:47.168 --rc geninfo_all_blocks=1 00:05:47.168 --rc geninfo_unexecuted_blocks=1 00:05:47.168 00:05:47.168 ' 00:05:47.168 12:09:17 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.168 --rc genhtml_branch_coverage=1 00:05:47.168 --rc genhtml_function_coverage=1 00:05:47.168 --rc genhtml_legend=1 00:05:47.168 --rc geninfo_all_blocks=1 00:05:47.168 --rc geninfo_unexecuted_blocks=1 00:05:47.168 00:05:47.168 ' 00:05:47.168 12:09:17 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.168 --rc genhtml_branch_coverage=1 00:05:47.168 --rc genhtml_function_coverage=1 00:05:47.168 --rc genhtml_legend=1 00:05:47.168 --rc geninfo_all_blocks=1 00:05:47.168 --rc geninfo_unexecuted_blocks=1 00:05:47.168 00:05:47.168 ' 00:05:47.168 12:09:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:47.168 12:09:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58061 00:05:47.168 12:09:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58061 00:05:47.168 12:09:17 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58061 ']' 00:05:47.168 12:09:17 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.168 12:09:17 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.168 12:09:17 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.168 12:09:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.168 12:09:17 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.168 12:09:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.168 [2024-12-05 12:09:17.869159] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:05:47.168 [2024-12-05 12:09:17.869802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58061 ] 00:05:47.168 [2024-12-05 12:09:18.031997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.429 [2024-12-05 12:09:18.147238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.001 12:09:18 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.001 12:09:18 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:48.001 12:09:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:48.263 12:09:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58061 00:05:48.263 12:09:19 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58061 ']' 00:05:48.263 12:09:19 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58061 00:05:48.263 12:09:19 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:48.263 12:09:19 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.263 12:09:19 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58061 00:05:48.263 killing process with pid 58061 00:05:48.263 12:09:19 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.263 12:09:19 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.263 12:09:19 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58061' 00:05:48.263 12:09:19 alias_rpc -- common/autotest_common.sh@973 -- # kill 58061 00:05:48.263 12:09:19 alias_rpc -- common/autotest_common.sh@978 -- # wait 58061 00:05:50.178 ************************************ 00:05:50.178 END TEST alias_rpc 00:05:50.178 ************************************ 00:05:50.178 00:05:50.178 real 0m2.991s 00:05:50.178 user 0m3.034s 00:05:50.178 sys 0m0.457s 00:05:50.178 12:09:20 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.178 12:09:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.178 12:09:20 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:50.178 12:09:20 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:50.178 12:09:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.178 12:09:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.178 12:09:20 -- common/autotest_common.sh@10 -- # set +x 00:05:50.178 ************************************ 00:05:50.178 START TEST spdkcli_tcp 00:05:50.178 ************************************ 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:50.178 * Looking for test storage... 
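Back to the alias_rpc teardown traced just above: killprocess (autotest_common.sh@954-978) is the other teardown idiom in this log. After the test drives rpc.py load_config -i against the target, killprocess verifies the pid, inspects the process name with ps (reactor_0 here), refuses to signal a sudo wrapper, then kills and reaps. Distilled below; the non-Linux branch and the sudo branch are not exercised in this run, so their handling is assumed:

    # Teardown per the killprocess trace: confirm the pid is alive,
    # check what it is before signalling, then kill and wait to reap.
    killprocess_sketch() {
        local pid=$1 process_name=
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1              # nothing to kill
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid") # reactor_0 in the trace
        fi
        [[ $process_name == sudo ]] && return 1             # assumed: never signal sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                             # reap; pid is our child here
    }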
00:05:50.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.178 12:09:20 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.178 --rc genhtml_branch_coverage=1 00:05:50.178 --rc genhtml_function_coverage=1 00:05:50.178 --rc genhtml_legend=1 00:05:50.178 --rc geninfo_all_blocks=1 00:05:50.178 --rc geninfo_unexecuted_blocks=1 00:05:50.178 00:05:50.178 ' 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.178 --rc genhtml_branch_coverage=1 00:05:50.178 --rc genhtml_function_coverage=1 00:05:50.178 --rc genhtml_legend=1 00:05:50.178 --rc geninfo_all_blocks=1 00:05:50.178 --rc geninfo_unexecuted_blocks=1 00:05:50.178 
00:05:50.178 ' 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.178 --rc genhtml_branch_coverage=1 00:05:50.178 --rc genhtml_function_coverage=1 00:05:50.178 --rc genhtml_legend=1 00:05:50.178 --rc geninfo_all_blocks=1 00:05:50.178 --rc geninfo_unexecuted_blocks=1 00:05:50.178 00:05:50.178 ' 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.178 --rc genhtml_branch_coverage=1 00:05:50.178 --rc genhtml_function_coverage=1 00:05:50.178 --rc genhtml_legend=1 00:05:50.178 --rc geninfo_all_blocks=1 00:05:50.178 --rc geninfo_unexecuted_blocks=1 00:05:50.178 00:05:50.178 ' 00:05:50.178 12:09:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:50.178 12:09:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:50.178 12:09:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:50.178 12:09:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:50.178 12:09:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:50.178 12:09:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:50.178 12:09:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.178 12:09:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58157 00:05:50.178 12:09:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58157 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58157 ']' 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.178 12:09:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.178 12:09:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:50.178 [2024-12-05 12:09:20.916043] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
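A note on the launch just traced (spdkcli/tcp.sh@24): this target gets -m 0x3 -p 0, where -m is a CPU core bitmask and -p picks the main core, so the startup lines below report two available cores and start reactors on cores 0 and 1. A quick way to expand such a mask (the snippet is illustrative, not from the harness):

    # -m 0x3 = binary 11: bits 0 and 1 set, hence reactors on cores 0 and 1.
    mask=0x3
    printf 'cores selected by %s:' "$mask"
    for c in {0..31}; do
        (( (mask >> c) & 1 )) && printf ' %d' "$c"
    done
    echo    # prints: cores selected by 0x3: 0 1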
00:05:50.178 [2024-12-05 12:09:20.916162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58157 ] 00:05:50.439 [2024-12-05 12:09:21.077116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.439 [2024-12-05 12:09:21.190362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.439 [2024-12-05 12:09:21.190394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.011 12:09:21 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.011 12:09:21 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:51.011 12:09:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58174 00:05:51.011 12:09:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:51.011 12:09:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:51.272 [ 00:05:51.272 "bdev_malloc_delete", 00:05:51.272 "bdev_malloc_create", 00:05:51.272 "bdev_null_resize", 00:05:51.272 "bdev_null_delete", 00:05:51.272 "bdev_null_create", 00:05:51.272 "bdev_nvme_cuse_unregister", 00:05:51.272 "bdev_nvme_cuse_register", 00:05:51.272 "bdev_opal_new_user", 00:05:51.272 "bdev_opal_set_lock_state", 00:05:51.272 "bdev_opal_delete", 00:05:51.272 "bdev_opal_get_info", 00:05:51.272 "bdev_opal_create", 00:05:51.272 "bdev_nvme_opal_revert", 00:05:51.272 "bdev_nvme_opal_init", 00:05:51.272 "bdev_nvme_send_cmd", 00:05:51.272 "bdev_nvme_set_keys", 00:05:51.272 "bdev_nvme_get_path_iostat", 00:05:51.272 "bdev_nvme_get_mdns_discovery_info", 00:05:51.272 "bdev_nvme_stop_mdns_discovery", 00:05:51.272 "bdev_nvme_start_mdns_discovery", 00:05:51.272 "bdev_nvme_set_multipath_policy", 00:05:51.272 "bdev_nvme_set_preferred_path", 00:05:51.272 "bdev_nvme_get_io_paths", 00:05:51.272 "bdev_nvme_remove_error_injection", 00:05:51.272 "bdev_nvme_add_error_injection", 00:05:51.272 "bdev_nvme_get_discovery_info", 00:05:51.272 "bdev_nvme_stop_discovery", 00:05:51.272 "bdev_nvme_start_discovery", 00:05:51.272 "bdev_nvme_get_controller_health_info", 00:05:51.272 "bdev_nvme_disable_controller", 00:05:51.272 "bdev_nvme_enable_controller", 00:05:51.272 "bdev_nvme_reset_controller", 00:05:51.272 "bdev_nvme_get_transport_statistics", 00:05:51.272 "bdev_nvme_apply_firmware", 00:05:51.272 "bdev_nvme_detach_controller", 00:05:51.272 "bdev_nvme_get_controllers", 00:05:51.272 "bdev_nvme_attach_controller", 00:05:51.272 "bdev_nvme_set_hotplug", 00:05:51.272 "bdev_nvme_set_options", 00:05:51.272 "bdev_passthru_delete", 00:05:51.272 "bdev_passthru_create", 00:05:51.272 "bdev_lvol_set_parent_bdev", 00:05:51.272 "bdev_lvol_set_parent", 00:05:51.272 "bdev_lvol_check_shallow_copy", 00:05:51.272 "bdev_lvol_start_shallow_copy", 00:05:51.272 "bdev_lvol_grow_lvstore", 00:05:51.272 "bdev_lvol_get_lvols", 00:05:51.272 "bdev_lvol_get_lvstores", 00:05:51.272 "bdev_lvol_delete", 00:05:51.272 "bdev_lvol_set_read_only", 00:05:51.272 "bdev_lvol_resize", 00:05:51.272 "bdev_lvol_decouple_parent", 00:05:51.272 "bdev_lvol_inflate", 00:05:51.272 "bdev_lvol_rename", 00:05:51.272 "bdev_lvol_clone_bdev", 00:05:51.272 "bdev_lvol_clone", 00:05:51.272 "bdev_lvol_snapshot", 00:05:51.272 "bdev_lvol_create", 00:05:51.272 "bdev_lvol_delete_lvstore", 00:05:51.272 "bdev_lvol_rename_lvstore", 00:05:51.272 
"bdev_lvol_create_lvstore", 00:05:51.272 "bdev_raid_set_options", 00:05:51.272 "bdev_raid_remove_base_bdev", 00:05:51.272 "bdev_raid_add_base_bdev", 00:05:51.272 "bdev_raid_delete", 00:05:51.272 "bdev_raid_create", 00:05:51.272 "bdev_raid_get_bdevs", 00:05:51.272 "bdev_error_inject_error", 00:05:51.272 "bdev_error_delete", 00:05:51.272 "bdev_error_create", 00:05:51.272 "bdev_split_delete", 00:05:51.272 "bdev_split_create", 00:05:51.272 "bdev_delay_delete", 00:05:51.272 "bdev_delay_create", 00:05:51.272 "bdev_delay_update_latency", 00:05:51.272 "bdev_zone_block_delete", 00:05:51.272 "bdev_zone_block_create", 00:05:51.272 "blobfs_create", 00:05:51.272 "blobfs_detect", 00:05:51.272 "blobfs_set_cache_size", 00:05:51.272 "bdev_xnvme_delete", 00:05:51.272 "bdev_xnvme_create", 00:05:51.272 "bdev_aio_delete", 00:05:51.272 "bdev_aio_rescan", 00:05:51.272 "bdev_aio_create", 00:05:51.272 "bdev_ftl_set_property", 00:05:51.272 "bdev_ftl_get_properties", 00:05:51.272 "bdev_ftl_get_stats", 00:05:51.272 "bdev_ftl_unmap", 00:05:51.272 "bdev_ftl_unload", 00:05:51.272 "bdev_ftl_delete", 00:05:51.272 "bdev_ftl_load", 00:05:51.272 "bdev_ftl_create", 00:05:51.272 "bdev_virtio_attach_controller", 00:05:51.272 "bdev_virtio_scsi_get_devices", 00:05:51.272 "bdev_virtio_detach_controller", 00:05:51.272 "bdev_virtio_blk_set_hotplug", 00:05:51.272 "bdev_iscsi_delete", 00:05:51.272 "bdev_iscsi_create", 00:05:51.272 "bdev_iscsi_set_options", 00:05:51.272 "accel_error_inject_error", 00:05:51.272 "ioat_scan_accel_module", 00:05:51.272 "dsa_scan_accel_module", 00:05:51.272 "iaa_scan_accel_module", 00:05:51.272 "keyring_file_remove_key", 00:05:51.272 "keyring_file_add_key", 00:05:51.272 "keyring_linux_set_options", 00:05:51.272 "fsdev_aio_delete", 00:05:51.272 "fsdev_aio_create", 00:05:51.272 "iscsi_get_histogram", 00:05:51.272 "iscsi_enable_histogram", 00:05:51.272 "iscsi_set_options", 00:05:51.272 "iscsi_get_auth_groups", 00:05:51.272 "iscsi_auth_group_remove_secret", 00:05:51.272 "iscsi_auth_group_add_secret", 00:05:51.272 "iscsi_delete_auth_group", 00:05:51.272 "iscsi_create_auth_group", 00:05:51.272 "iscsi_set_discovery_auth", 00:05:51.272 "iscsi_get_options", 00:05:51.272 "iscsi_target_node_request_logout", 00:05:51.272 "iscsi_target_node_set_redirect", 00:05:51.272 "iscsi_target_node_set_auth", 00:05:51.272 "iscsi_target_node_add_lun", 00:05:51.272 "iscsi_get_stats", 00:05:51.272 "iscsi_get_connections", 00:05:51.272 "iscsi_portal_group_set_auth", 00:05:51.272 "iscsi_start_portal_group", 00:05:51.272 "iscsi_delete_portal_group", 00:05:51.272 "iscsi_create_portal_group", 00:05:51.272 "iscsi_get_portal_groups", 00:05:51.272 "iscsi_delete_target_node", 00:05:51.272 "iscsi_target_node_remove_pg_ig_maps", 00:05:51.272 "iscsi_target_node_add_pg_ig_maps", 00:05:51.272 "iscsi_create_target_node", 00:05:51.272 "iscsi_get_target_nodes", 00:05:51.272 "iscsi_delete_initiator_group", 00:05:51.272 "iscsi_initiator_group_remove_initiators", 00:05:51.272 "iscsi_initiator_group_add_initiators", 00:05:51.272 "iscsi_create_initiator_group", 00:05:51.272 "iscsi_get_initiator_groups", 00:05:51.272 "nvmf_set_crdt", 00:05:51.272 "nvmf_set_config", 00:05:51.272 "nvmf_set_max_subsystems", 00:05:51.272 "nvmf_stop_mdns_prr", 00:05:51.272 "nvmf_publish_mdns_prr", 00:05:51.272 "nvmf_subsystem_get_listeners", 00:05:51.272 "nvmf_subsystem_get_qpairs", 00:05:51.272 "nvmf_subsystem_get_controllers", 00:05:51.272 "nvmf_get_stats", 00:05:51.272 "nvmf_get_transports", 00:05:51.272 "nvmf_create_transport", 00:05:51.272 "nvmf_get_targets", 00:05:51.272 
"nvmf_delete_target", 00:05:51.272 "nvmf_create_target", 00:05:51.272 "nvmf_subsystem_allow_any_host", 00:05:51.272 "nvmf_subsystem_set_keys", 00:05:51.272 "nvmf_subsystem_remove_host", 00:05:51.272 "nvmf_subsystem_add_host", 00:05:51.272 "nvmf_ns_remove_host", 00:05:51.272 "nvmf_ns_add_host", 00:05:51.272 "nvmf_subsystem_remove_ns", 00:05:51.272 "nvmf_subsystem_set_ns_ana_group", 00:05:51.272 "nvmf_subsystem_add_ns", 00:05:51.272 "nvmf_subsystem_listener_set_ana_state", 00:05:51.272 "nvmf_discovery_get_referrals", 00:05:51.272 "nvmf_discovery_remove_referral", 00:05:51.272 "nvmf_discovery_add_referral", 00:05:51.272 "nvmf_subsystem_remove_listener", 00:05:51.272 "nvmf_subsystem_add_listener", 00:05:51.272 "nvmf_delete_subsystem", 00:05:51.272 "nvmf_create_subsystem", 00:05:51.272 "nvmf_get_subsystems", 00:05:51.272 "env_dpdk_get_mem_stats", 00:05:51.272 "nbd_get_disks", 00:05:51.272 "nbd_stop_disk", 00:05:51.272 "nbd_start_disk", 00:05:51.272 "ublk_recover_disk", 00:05:51.272 "ublk_get_disks", 00:05:51.272 "ublk_stop_disk", 00:05:51.272 "ublk_start_disk", 00:05:51.272 "ublk_destroy_target", 00:05:51.272 "ublk_create_target", 00:05:51.272 "virtio_blk_create_transport", 00:05:51.272 "virtio_blk_get_transports", 00:05:51.272 "vhost_controller_set_coalescing", 00:05:51.272 "vhost_get_controllers", 00:05:51.272 "vhost_delete_controller", 00:05:51.272 "vhost_create_blk_controller", 00:05:51.272 "vhost_scsi_controller_remove_target", 00:05:51.272 "vhost_scsi_controller_add_target", 00:05:51.272 "vhost_start_scsi_controller", 00:05:51.272 "vhost_create_scsi_controller", 00:05:51.272 "thread_set_cpumask", 00:05:51.272 "scheduler_set_options", 00:05:51.272 "framework_get_governor", 00:05:51.272 "framework_get_scheduler", 00:05:51.272 "framework_set_scheduler", 00:05:51.272 "framework_get_reactors", 00:05:51.272 "thread_get_io_channels", 00:05:51.272 "thread_get_pollers", 00:05:51.272 "thread_get_stats", 00:05:51.272 "framework_monitor_context_switch", 00:05:51.272 "spdk_kill_instance", 00:05:51.272 "log_enable_timestamps", 00:05:51.272 "log_get_flags", 00:05:51.272 "log_clear_flag", 00:05:51.272 "log_set_flag", 00:05:51.272 "log_get_level", 00:05:51.272 "log_set_level", 00:05:51.272 "log_get_print_level", 00:05:51.272 "log_set_print_level", 00:05:51.272 "framework_enable_cpumask_locks", 00:05:51.272 "framework_disable_cpumask_locks", 00:05:51.272 "framework_wait_init", 00:05:51.272 "framework_start_init", 00:05:51.272 "scsi_get_devices", 00:05:51.272 "bdev_get_histogram", 00:05:51.272 "bdev_enable_histogram", 00:05:51.272 "bdev_set_qos_limit", 00:05:51.272 "bdev_set_qd_sampling_period", 00:05:51.272 "bdev_get_bdevs", 00:05:51.272 "bdev_reset_iostat", 00:05:51.272 "bdev_get_iostat", 00:05:51.272 "bdev_examine", 00:05:51.272 "bdev_wait_for_examine", 00:05:51.272 "bdev_set_options", 00:05:51.272 "accel_get_stats", 00:05:51.272 "accel_set_options", 00:05:51.272 "accel_set_driver", 00:05:51.272 "accel_crypto_key_destroy", 00:05:51.272 "accel_crypto_keys_get", 00:05:51.272 "accel_crypto_key_create", 00:05:51.272 "accel_assign_opc", 00:05:51.272 "accel_get_module_info", 00:05:51.272 "accel_get_opc_assignments", 00:05:51.272 "vmd_rescan", 00:05:51.272 "vmd_remove_device", 00:05:51.272 "vmd_enable", 00:05:51.272 "sock_get_default_impl", 00:05:51.272 "sock_set_default_impl", 00:05:51.272 "sock_impl_set_options", 00:05:51.272 "sock_impl_get_options", 00:05:51.272 "iobuf_get_stats", 00:05:51.272 "iobuf_set_options", 00:05:51.272 "keyring_get_keys", 00:05:51.272 "framework_get_pci_devices", 00:05:51.272 
"framework_get_config", 00:05:51.272 "framework_get_subsystems", 00:05:51.272 "fsdev_set_opts", 00:05:51.272 "fsdev_get_opts", 00:05:51.272 "trace_get_info", 00:05:51.272 "trace_get_tpoint_group_mask", 00:05:51.272 "trace_disable_tpoint_group", 00:05:51.272 "trace_enable_tpoint_group", 00:05:51.272 "trace_clear_tpoint_mask", 00:05:51.272 "trace_set_tpoint_mask", 00:05:51.272 "notify_get_notifications", 00:05:51.272 "notify_get_types", 00:05:51.272 "spdk_get_version", 00:05:51.272 "rpc_get_methods" 00:05:51.272 ] 00:05:51.273 12:09:22 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:51.273 12:09:22 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:51.273 12:09:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.533 12:09:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:51.533 12:09:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58157 00:05:51.533 12:09:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58157 ']' 00:05:51.533 12:09:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58157 00:05:51.533 12:09:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:51.533 12:09:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.533 12:09:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58157 00:05:51.533 killing process with pid 58157 00:05:51.533 12:09:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.533 12:09:22 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.533 12:09:22 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58157' 00:05:51.533 12:09:22 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58157 00:05:51.533 12:09:22 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58157 00:05:53.463 ************************************ 00:05:53.463 END TEST spdkcli_tcp 00:05:53.463 ************************************ 00:05:53.463 00:05:53.463 real 0m3.115s 00:05:53.463 user 0m5.638s 00:05:53.463 sys 0m0.501s 00:05:53.463 12:09:23 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.463 12:09:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.463 12:09:23 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.463 12:09:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.463 12:09:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.463 12:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:53.463 ************************************ 00:05:53.463 START TEST dpdk_mem_utility 00:05:53.463 ************************************ 00:05:53.463 12:09:23 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.463 * Looking for test storage... 
00:05:53.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:53.463 12:09:23 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.463 12:09:23 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.463 12:09:23 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.463 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:53.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.463 12:09:24 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:53.463 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.463 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.463 --rc genhtml_branch_coverage=1 00:05:53.463 --rc genhtml_function_coverage=1 00:05:53.463 --rc genhtml_legend=1 00:05:53.463 --rc geninfo_all_blocks=1 00:05:53.463 --rc geninfo_unexecuted_blocks=1 00:05:53.463 00:05:53.463 ' 00:05:53.463 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.463 --rc genhtml_branch_coverage=1 00:05:53.463 --rc genhtml_function_coverage=1 00:05:53.463 --rc genhtml_legend=1 00:05:53.463 --rc geninfo_all_blocks=1 00:05:53.463 --rc geninfo_unexecuted_blocks=1 00:05:53.463 00:05:53.463 ' 00:05:53.463 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.463 --rc genhtml_branch_coverage=1 00:05:53.463 --rc genhtml_function_coverage=1 00:05:53.463 --rc genhtml_legend=1 00:05:53.463 --rc geninfo_all_blocks=1 00:05:53.463 --rc geninfo_unexecuted_blocks=1 00:05:53.463 00:05:53.463 ' 00:05:53.463 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.463 --rc genhtml_branch_coverage=1 00:05:53.463 --rc genhtml_function_coverage=1 00:05:53.463 --rc genhtml_legend=1 00:05:53.463 --rc geninfo_all_blocks=1 00:05:53.463 --rc geninfo_unexecuted_blocks=1 00:05:53.463 00:05:53.463 ' 00:05:53.463 12:09:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:53.463 12:09:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58268 00:05:53.463 12:09:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58268 00:05:53.463 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58268 ']' 00:05:53.463 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.463 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.463 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.463 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.463 12:09:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:53.463 12:09:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.463 [2024-12-05 12:09:24.097250] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:05:53.463 [2024-12-05 12:09:24.097551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58268 ] 00:05:53.463 [2024-12-05 12:09:24.254132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.722 [2024-12-05 12:09:24.369349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.292 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.292 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:54.292 12:09:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:54.292 12:09:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:54.292 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.292 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:54.292 { 00:05:54.292 "filename": "/tmp/spdk_mem_dump.txt" 00:05:54.292 } 00:05:54.292 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.292 12:09:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:54.292 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:54.292 1 heaps totaling size 824.000000 MiB 00:05:54.292 size: 824.000000 MiB heap id: 0 00:05:54.292 end heaps---------- 00:05:54.292 9 mempools totaling size 603.782043 MiB 00:05:54.292 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:54.292 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:54.292 size: 100.555481 MiB name: bdev_io_58268 00:05:54.292 size: 50.003479 MiB name: msgpool_58268 00:05:54.292 size: 36.509338 MiB name: fsdev_io_58268 00:05:54.292 size: 21.763794 MiB name: PDU_Pool 00:05:54.292 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:54.292 size: 4.133484 MiB name: evtpool_58268 00:05:54.292 size: 0.026123 MiB name: Session_Pool 00:05:54.292 end mempools------- 00:05:54.292 6 memzones totaling size 4.142822 MiB 00:05:54.292 size: 1.000366 MiB name: RG_ring_0_58268 00:05:54.292 size: 1.000366 MiB name: RG_ring_1_58268 00:05:54.292 size: 1.000366 MiB name: RG_ring_4_58268 00:05:54.292 size: 1.000366 MiB name: RG_ring_5_58268 00:05:54.292 size: 0.125366 MiB name: RG_ring_2_58268 00:05:54.292 size: 0.015991 MiB name: RG_ring_3_58268 00:05:54.292 end memzones------- 00:05:54.292 12:09:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:54.292 heap id: 0 total size: 824.000000 MiB number of busy elements: 327 number of free elements: 18 00:05:54.292 list of free elements. 
size: 16.778442 MiB 00:05:54.292 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:54.292 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:54.292 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:54.292 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:54.292 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:54.292 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:54.292 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:54.292 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:54.292 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:54.292 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:54.292 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:54.292 element at address: 0x20001b400000 with size: 0.559021 MiB 00:05:54.292 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:54.292 element at address: 0x200019600000 with size: 0.488220 MiB 00:05:54.292 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:54.292 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:54.292 element at address: 0x200028800000 with size: 0.391174 MiB 00:05:54.292 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:54.292 list of standard malloc elements. size: 199.290649 MiB 00:05:54.292 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:54.292 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:54.292 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:54.292 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:54.292 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:54.292 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:54.292 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:54.292 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:54.292 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:54.292 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:54.292 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:54.292 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:54.292 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:54.292 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:54.293 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b48f1c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b48f2c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b48f3c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b48f4c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b490ec0 with size: 0.000244 MiB 
00:05:54.293 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:54.293 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:54.294 element at 
address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:54.294 element at address: 0x200028864240 with size: 0.000244 MiB 00:05:54.294 element at address: 0x200028864340 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886b000 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886cc80 
with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:54.294 element at address: 0x20002886fd80 with size: 0.000244 MiB 
00:05:54.294 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:54.294 list of memzone associated elements. size: 607.930908 MiB 00:05:54.294 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:54.294 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:54.294 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:54.294 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:54.294 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:54.294 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58268_0 00:05:54.294 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:54.294 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58268_0 00:05:54.294 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:54.294 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58268_0 00:05:54.294 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:54.294 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:54.294 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:54.294 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:54.294 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:54.294 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58268_0 00:05:54.294 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:54.294 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58268 00:05:54.294 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:54.294 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58268 00:05:54.294 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:54.294 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:54.294 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:54.294 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:54.294 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:54.294 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:54.294 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:54.294 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:54.294 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:54.294 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58268 00:05:54.294 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:54.294 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58268 00:05:54.294 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:54.294 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58268 00:05:54.294 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:54.294 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58268 00:05:54.294 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:54.294 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58268 00:05:54.294 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:54.294 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58268 00:05:54.294 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:54.294 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:54.295 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:54.295 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 
00:05:54.295 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:05:54.295 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:54.295 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:54.295 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58268
00:05:54.295 element at address: 0x20000085df80 with size: 0.125549 MiB
00:05:54.295 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58268
00:05:54.295 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:05:54.295 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:54.295 element at address: 0x200028864440 with size: 0.023804 MiB
00:05:54.295 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:54.295 element at address: 0x200000859d40 with size: 0.016174 MiB
00:05:54.295 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58268
00:05:54.295 element at address: 0x20002886a5c0 with size: 0.002502 MiB
00:05:54.295 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:54.295 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:54.295 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58268
00:05:54.295 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:54.295 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58268
00:05:54.295 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:54.295 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58268
00:05:54.295 element at address: 0x20002886b100 with size: 0.000366 MiB
00:05:54.295 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:54.295 12:09:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:54.295 12:09:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58268
00:05:54.295 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58268 ']'
00:05:54.295 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58268
00:05:54.295 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:54.295 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:54.295 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58268
00:05:54.554 killing process with pid 58268
00:05:54.554 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:54.554 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:54.554 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58268'
00:05:54.554 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58268
00:05:54.554 12:09:25 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58268
00:05:55.932 ************************************
00:05:55.932 END TEST dpdk_mem_utility
00:05:55.932 ************************************
00:05:55.932
00:05:55.932 real 0m2.923s
00:05:55.932 user 0m2.885s
00:05:55.932 sys 0m0.456s
00:05:55.932 12:09:26 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:55.932 12:09:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:56.192 12:09:26 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:56.192 12:09:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
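The element/memzone listing that closes above is the dpdk_mem_utility suite reading heap state out of the running SPDK target (pid 58268) before killing it: each RG_*/MP_* memzone maps back to an SPDK pool such as the PDU pools, the msgpool, the bdev_io and fsdev_io pools, and the SCSI task pool. A rough way to request the same dump by hand, outside the harness, might look like the following sketch (assumptions: an SPDK app is already listening on the default /var/tmp/spdk.sock, and env_dpdk_get_mem_stats is the RPC the test script wraps):

  # Sketch, not part of the run above. Assumes spdk_tgt (or any SPDK app)
  # built from this checkout is already running.
  cd /home/vagrant/spdk_repo/spdk
  # Ask the env layer to write malloc/memzone statistics to a dump file;
  # the JSON reply names the file that was written. cat that file to get
  # an element/memzone listing like the one captured above.
  ./scripts/rpc.py env_dpdk_get_mem_stats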
12:09:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.192 12:09:26 -- common/autotest_common.sh@10 -- # set +x 00:05:56.192 ************************************ 00:05:56.192 START TEST event 00:05:56.192 ************************************ 00:05:56.192 12:09:26 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:56.192 * Looking for test storage... 00:05:56.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:56.192 12:09:26 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:56.192 12:09:26 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:56.192 12:09:26 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:56.192 12:09:26 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:56.192 12:09:26 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.192 12:09:26 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.192 12:09:26 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.192 12:09:26 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.192 12:09:26 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.192 12:09:26 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.192 12:09:26 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.192 12:09:26 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.192 12:09:26 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.192 12:09:26 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.192 12:09:26 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.192 12:09:26 event -- scripts/common.sh@344 -- # case "$op" in 00:05:56.192 12:09:26 event -- scripts/common.sh@345 -- # : 1 00:05:56.192 12:09:26 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.192 12:09:26 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.192 12:09:26 event -- scripts/common.sh@365 -- # decimal 1 00:05:56.192 12:09:26 event -- scripts/common.sh@353 -- # local d=1 00:05:56.192 12:09:26 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.192 12:09:26 event -- scripts/common.sh@355 -- # echo 1 00:05:56.192 12:09:26 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.192 12:09:26 event -- scripts/common.sh@366 -- # decimal 2 00:05:56.192 12:09:26 event -- scripts/common.sh@353 -- # local d=2 00:05:56.192 12:09:26 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.192 12:09:26 event -- scripts/common.sh@355 -- # echo 2 00:05:56.192 12:09:26 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.192 12:09:26 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.192 12:09:26 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.192 12:09:26 event -- scripts/common.sh@368 -- # return 0 00:05:56.192 12:09:26 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.192 12:09:26 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:56.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.192 --rc genhtml_branch_coverage=1 00:05:56.192 --rc genhtml_function_coverage=1 00:05:56.192 --rc genhtml_legend=1 00:05:56.192 --rc geninfo_all_blocks=1 00:05:56.192 --rc geninfo_unexecuted_blocks=1 00:05:56.192 00:05:56.192 ' 00:05:56.192 12:09:26 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:56.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.192 --rc genhtml_branch_coverage=1 00:05:56.192 --rc genhtml_function_coverage=1 00:05:56.192 --rc genhtml_legend=1 00:05:56.192 --rc geninfo_all_blocks=1 00:05:56.192 --rc geninfo_unexecuted_blocks=1 00:05:56.192 00:05:56.192 ' 00:05:56.192 12:09:26 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:56.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.192 --rc genhtml_branch_coverage=1 00:05:56.192 --rc genhtml_function_coverage=1 00:05:56.192 --rc genhtml_legend=1 00:05:56.192 --rc geninfo_all_blocks=1 00:05:56.192 --rc geninfo_unexecuted_blocks=1 00:05:56.192 00:05:56.192 ' 00:05:56.192 12:09:26 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:56.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.192 --rc genhtml_branch_coverage=1 00:05:56.192 --rc genhtml_function_coverage=1 00:05:56.192 --rc genhtml_legend=1 00:05:56.192 --rc geninfo_all_blocks=1 00:05:56.192 --rc geninfo_unexecuted_blocks=1 00:05:56.192 00:05:56.192 ' 00:05:56.192 12:09:26 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:56.192 12:09:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:56.192 12:09:26 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:56.192 12:09:26 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:56.192 12:09:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.192 12:09:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.192 ************************************ 00:05:56.192 START TEST event_perf 00:05:56.192 ************************************ 00:05:56.192 12:09:26 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:56.192 Running I/O for 1 seconds...[2024-12-05 
12:09:27.001351] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:05:56.192 [2024-12-05 12:09:27.001936] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58365 ] 00:05:56.451 [2024-12-05 12:09:27.161261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.451 [2024-12-05 12:09:27.283564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.451 [2024-12-05 12:09:27.283657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.451 Running I/O for 1 seconds...[2024-12-05 12:09:27.283919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.451 [2024-12-05 12:09:27.283941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.867 00:05:57.867 lcore 0: 149977 00:05:57.867 lcore 1: 149980 00:05:57.867 lcore 2: 149981 00:05:57.867 lcore 3: 149978 00:05:57.867 done. 00:05:57.867 00:05:57.867 real 0m1.490s 00:05:57.867 user 0m4.287s 00:05:57.867 sys 0m0.081s 00:05:57.867 12:09:28 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.867 12:09:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.867 ************************************ 00:05:57.867 END TEST event_perf 00:05:57.867 ************************************ 00:05:57.867 12:09:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:57.867 12:09:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:57.867 12:09:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.867 12:09:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.867 ************************************ 00:05:57.867 START TEST event_reactor 00:05:57.867 ************************************ 00:05:57.867 12:09:28 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:57.867 [2024-12-05 12:09:28.535556] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
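For orientation in the event_perf block above: the binary was started with -m 0xF (lcores 0 through 3) and -t 1, so the four "Reactor started on core N" notices and the four closely matched lcore counters (~149980 events each) are the per-reactor event throughput of a 1-second run. Scaling behaviour can be probed by varying only the core mask, reusing the binary and flags shown in the trace:

  BIN=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
  $BIN -m 0x1 -t 1   # one reactor: a single lcore counter
  $BIN -m 0x3 -t 1   # two reactors
  $BIN -m 0xF -t 1   # four reactors, as in the run captured above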
00:05:57.867 [2024-12-05 12:09:28.535800] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58410 ] 00:05:57.867 [2024-12-05 12:09:28.693948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.125 [2024-12-05 12:09:28.810454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.499 test_start 00:05:59.499 oneshot 00:05:59.499 tick 100 00:05:59.499 tick 100 00:05:59.499 tick 250 00:05:59.499 tick 100 00:05:59.499 tick 100 00:05:59.499 tick 100 00:05:59.499 tick 250 00:05:59.499 tick 500 00:05:59.499 tick 100 00:05:59.499 tick 100 00:05:59.499 tick 250 00:05:59.499 tick 100 00:05:59.499 tick 100 00:05:59.499 test_end 00:05:59.499 00:05:59.499 real 0m1.466s 00:05:59.499 user 0m1.288s 00:05:59.499 sys 0m0.068s 00:05:59.499 12:09:29 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.499 12:09:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:59.499 ************************************ 00:05:59.499 END TEST event_reactor 00:05:59.499 ************************************ 00:05:59.499 12:09:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:59.499 12:09:30 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:59.499 12:09:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.499 12:09:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.499 ************************************ 00:05:59.499 START TEST event_reactor_perf 00:05:59.499 ************************************ 00:05:59.499 12:09:30 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:59.499 [2024-12-05 12:09:30.049002] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
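In the event_reactor trace above, a single reactor on core 0 (-c 0x1 in the EAL line) drives the test's timers: "oneshot" fires once, and each "tick N" line appears to be a timed event firing under the period label it was registered with (100 fires most often and 500 only once in the 1-second window, consistent with their relative periods). The run time is the only knob the invocation exposes, so a longer sample simply accumulates more tick lines:

  # Same binary as above; -t selects the run time in seconds.
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 5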
00:05:59.499 [2024-12-05 12:09:30.049140] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58441 ] 00:05:59.499 [2024-12-05 12:09:30.209283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.499 [2024-12-05 12:09:30.326932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.896 test_start 00:06:00.896 test_end 00:06:00.896 Performance: 311981 events per second 00:06:00.896 00:06:00.896 real 0m1.476s 00:06:00.896 user 0m1.286s 00:06:00.896 sys 0m0.080s 00:06:00.896 12:09:31 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.896 12:09:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.896 ************************************ 00:06:00.896 END TEST event_reactor_perf 00:06:00.896 ************************************ 00:06:00.896 12:09:31 event -- event/event.sh@49 -- # uname -s 00:06:00.896 12:09:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:00.896 12:09:31 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:00.896 12:09:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.896 12:09:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.896 12:09:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.896 ************************************ 00:06:00.896 START TEST event_scheduler 00:06:00.896 ************************************ 00:06:00.896 12:09:31 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:00.896 * Looking for test storage... 
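The reactor_perf figure above (Performance: 311981 events per second) is the single-core event-processing rate: one reactor (-c 0x1), a 1-second window between test_start and test_end, and the counter printed at the end. Re-running it standalone needs only the invocation already shown in the trace:

  # Same flags as the harness run above; comparing the 'Performance:'
  # line across code changes is a quick way to spot event-path regressions.
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1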
00:06:00.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:00.896 12:09:31 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.896 12:09:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.896 12:09:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.896 12:09:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.896 12:09:31 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.896 12:09:31 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.896 12:09:31 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.896 12:09:31 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.896 12:09:31 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.896 12:09:31 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.896 12:09:31 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.896 12:09:31 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.896 12:09:31 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.896 12:09:31 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.896 12:09:31 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.896 12:09:31 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:00.896 12:09:31 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:00.896 12:09:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.897 12:09:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.897 12:09:31 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:00.897 12:09:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:00.897 12:09:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.897 12:09:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:00.897 12:09:31 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.897 12:09:31 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:00.897 12:09:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:00.897 12:09:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.897 12:09:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:00.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:00.897 12:09:31 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.897 12:09:31 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.897 12:09:31 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.897 12:09:31 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:00.897 12:09:31 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.897 12:09:31 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.897 --rc genhtml_branch_coverage=1 00:06:00.897 --rc genhtml_function_coverage=1 00:06:00.897 --rc genhtml_legend=1 00:06:00.897 --rc geninfo_all_blocks=1 00:06:00.897 --rc geninfo_unexecuted_blocks=1 00:06:00.897 00:06:00.897 ' 00:06:00.897 12:09:31 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.897 --rc genhtml_branch_coverage=1 00:06:00.897 --rc genhtml_function_coverage=1 00:06:00.897 --rc genhtml_legend=1 00:06:00.897 --rc geninfo_all_blocks=1 00:06:00.897 --rc geninfo_unexecuted_blocks=1 00:06:00.897 00:06:00.897 ' 00:06:00.897 12:09:31 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.897 --rc genhtml_branch_coverage=1 00:06:00.897 --rc genhtml_function_coverage=1 00:06:00.897 --rc genhtml_legend=1 00:06:00.897 --rc geninfo_all_blocks=1 00:06:00.897 --rc geninfo_unexecuted_blocks=1 00:06:00.897 00:06:00.897 ' 00:06:00.897 12:09:31 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.897 --rc genhtml_branch_coverage=1 00:06:00.897 --rc genhtml_function_coverage=1 00:06:00.897 --rc genhtml_legend=1 00:06:00.897 --rc geninfo_all_blocks=1 00:06:00.897 --rc geninfo_unexecuted_blocks=1 00:06:00.897 00:06:00.897 ' 00:06:00.897 12:09:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:00.897 12:09:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58517 00:06:00.897 12:09:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.897 12:09:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58517 00:06:00.897 12:09:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:00.897 12:09:31 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58517 ']' 00:06:00.897 12:09:31 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.897 12:09:31 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.897 12:09:31 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.897 12:09:31 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.897 12:09:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:01.154 [2024-12-05 12:09:31.778943] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
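The scheduler helper above is launched with -m 0xF -p 0x2 --wait-for-rpc -f: four reactors, lcore 2 as the main core, and initialization parked until RPCs arrive. That pause is the point of the flow that follows, since the scheduler has to be selected (framework_set_scheduler) before the framework is allowed to finish starting (framework_start_init). The same handshake against a generic app would look roughly like this sketch (spdk_tgt stands in for the test's scheduler binary; the two RPCs are the ones issued in the trace below):

  cd /home/vagrant/spdk_repo/spdk
  ./build/bin/spdk_tgt -m 0xF -p 0x2 --wait-for-rpc &
  ./scripts/rpc.py framework_set_scheduler dynamic   # must precede init
  ./scripts/rpc.py framework_start_init              # resume startup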
00:06:01.154 [2024-12-05 12:09:31.780177] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58517 ]
[2024-12-05 12:09:31.946023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:01.412 [2024-12-05 12:09:32.053623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.412 [2024-12-05 12:09:32.053932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:01.412 [2024-12-05 12:09:32.054242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:01.412 [2024-12-05 12:09:32.054326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:01.979 12:09:32 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:01.979 12:09:32 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:06:01.979 12:09:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:01.979 12:09:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:01.979 12:09:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:01.979 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:01.979 POWER: Cannot set governor of lcore 0 to userspace
00:06:01.979 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:01.979 POWER: Cannot set governor of lcore 0 to performance
00:06:01.979 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:01.980 POWER: Cannot set governor of lcore 0 to userspace
00:06:01.980 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:01.980 POWER: Cannot set governor of lcore 0 to userspace
00:06:01.980 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:06:01.980 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:06:01.980 POWER: Unable to set Power Management Environment for lcore 0
[2024-12-05 12:09:32.628359] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
[2024-12-05 12:09:32.628398] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
[2024-12-05 12:09:32.628420] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
[2024-12-05 12:09:32.628440] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
[2024-12-05 12:09:32.628448] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
[2024-12-05 12:09:32.628457] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:01.980 12:09:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:01.980 12:09:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:01.980 12:09:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:01.980 12:09:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:02.239 [2024-12-05 12:09:32.856073] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
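The POWER and GUEST_CHANNEL errors above are expected inside this VM: the dynamic scheduler first probes the CPU-frequency governors through sysfs, then the virtio power-agent channel, and when both are missing it initializes without a dpdk governor and keeps its default thresholds (load limit 20, core limit 80, core busy 95, per the set_opts notices). Which scheduler ended up active can be checked from the RPC side (a sketch; framework_get_scheduler is, to the best of my knowledge, the RPC that reports the active scheduler and governor):

  # Expect 'dynamic' here even though the governor probe above failed.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_scheduler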
00:06:02.239 12:09:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.239 12:09:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:02.239 12:09:32 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.239 12:09:32 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.239 12:09:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.239 ************************************ 00:06:02.239 START TEST scheduler_create_thread 00:06:02.239 ************************************ 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.239 2 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.239 3 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.239 4 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.239 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.240 5 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.240 6 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.240 7 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.240 8 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.240 9 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.240 10 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.240 12:09:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.805 ************************************ 00:06:02.805 END TEST scheduler_create_thread 00:06:02.805 ************************************ 00:06:02.805 12:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.805 00:06:02.805 real 0m0.591s 00:06:02.805 user 0m0.016s 00:06:02.805 sys 0m0.002s 00:06:02.805 12:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.805 12:09:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.805 12:09:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:02.805 12:09:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58517 00:06:02.806 12:09:33 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58517 ']' 00:06:02.806 12:09:33 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58517 00:06:02.806 12:09:33 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:02.806 12:09:33 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.806 12:09:33 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58517 00:06:02.806 killing process with pid 58517 00:06:02.806 12:09:33 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:02.806 12:09:33 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:02.806 12:09:33 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58517' 00:06:02.806 12:09:33 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58517 00:06:02.806 12:09:33 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58517 00:06:03.371 [2024-12-05 12:09:33.937094] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
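The scheduler_create_thread subtest above is driven entirely through an RPC plugin: scheduler_thread_create takes a thread name (-n), an optional pin mask (-m) and an active percentage (-a) and returns a thread id, and scheduler_thread_set_active / scheduler_thread_delete then operate on that id (ids 11 and 12 in the trace). Stripped of the xtrace noise, a condensed subset of the calls comes down to this sketch (it assumes the scheduler app is up on the default socket and that the plugin module is importable, which the harness arranges by putting the scheduler test directory on PYTHONPATH):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin "$@"; }
  rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100  # pinned to lcore 0, 100% active
  rpc scheduler_thread_create -n one_third_active -a 30
  rpc scheduler_thread_create -n half_active -a 0             # returned id 11 above
  rpc scheduler_thread_set_active 11 50                       # raise it to 50% active
  rpc scheduler_thread_create -n deleted -a 100               # returned id 12 above
  rpc scheduler_thread_delete 12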
00:06:03.938 ************************************ 00:06:03.938 END TEST event_scheduler 00:06:03.938 ************************************ 00:06:03.938 00:06:03.938 real 0m3.115s 00:06:03.938 user 0m5.959s 00:06:03.938 sys 0m0.365s 00:06:03.938 12:09:34 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.938 12:09:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.938 12:09:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:03.938 12:09:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:03.938 12:09:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.938 12:09:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.938 12:09:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.938 ************************************ 00:06:03.938 START TEST app_repeat 00:06:03.938 ************************************ 00:06:03.938 12:09:34 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:03.938 12:09:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.938 12:09:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.938 12:09:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:03.938 12:09:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.938 12:09:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:03.938 12:09:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:03.938 12:09:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:03.939 Process app_repeat pid: 58601 00:06:03.939 spdk_app_start Round 0 00:06:03.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.939 12:09:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58601 00:06:03.939 12:09:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.939 12:09:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58601' 00:06:03.939 12:09:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:03.939 12:09:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:03.939 12:09:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58601 /var/tmp/spdk-nbd.sock 00:06:03.939 12:09:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58601 ']' 00:06:03.939 12:09:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.939 12:09:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.939 12:09:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.939 12:09:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.939 12:09:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.939 12:09:34 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:03.939 [2024-12-05 12:09:34.748415] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
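app_repeat above stress-tests application start/stop: across repeat_times=4 rounds it brings the app up on its own socket (/var/tmp/spdk-nbd.sock), creates two 64 MiB malloc bdevs with 4096-byte blocks, and exports them as /dev/nbd0 and /dev/nbd1 before tearing everything down again. The per-round RPC sequence, condensed from the surrounding trace (the bdev_malloc_create calls appear above, the nbd_start_disk export just below):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  rpc bdev_malloc_create 64 4096        # 64 MiB total, 4096-byte blocks -> Malloc0
  rpc bdev_malloc_create 64 4096        # -> Malloc1
  rpc nbd_start_disk Malloc0 /dev/nbd0
  rpc nbd_start_disk Malloc1 /dev/nbd1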
00:06:03.939 [2024-12-05 12:09:34.748563] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58601 ] 00:06:04.197 [2024-12-05 12:09:34.910500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.197 [2024-12-05 12:09:35.029674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.197 [2024-12-05 12:09:35.029789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.129 12:09:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.129 12:09:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:05.129 12:09:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.129 Malloc0 00:06:05.129 12:09:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.387 Malloc1 00:06:05.387 12:09:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.387 12:09:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.387 12:09:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.387 12:09:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.387 12:09:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.387 12:09:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.387 12:09:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.387 12:09:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.387 12:09:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.387 12:09:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.387 12:09:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.387 12:09:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.387 12:09:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.387 12:09:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.387 12:09:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.387 12:09:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.646 /dev/nbd0 00:06:05.646 12:09:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.646 12:09:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.646 12:09:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:05.646 12:09:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.646 12:09:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.646 12:09:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.646 12:09:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:05.646 12:09:36 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:05.646 12:09:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.646 12:09:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.646 12:09:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.646 1+0 records in 00:06:05.646 1+0 records out 00:06:05.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768777 s, 5.3 MB/s 00:06:05.646 12:09:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.646 12:09:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.646 12:09:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.646 12:09:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.646 12:09:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.646 12:09:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.646 12:09:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.646 12:09:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.905 /dev/nbd1 00:06:05.905 12:09:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.905 12:09:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.905 12:09:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:05.905 12:09:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.905 12:09:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.905 12:09:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.905 12:09:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:05.905 12:09:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.905 12:09:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.905 12:09:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.905 12:09:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.905 1+0 records in 00:06:05.905 1+0 records out 00:06:05.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036185 s, 11.3 MB/s 00:06:05.905 12:09:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.905 12:09:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.905 12:09:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.905 12:09:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.905 12:09:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.906 12:09:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.906 12:09:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.906 12:09:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.906 12:09:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.906 
12:09:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.165 { 00:06:06.165 "nbd_device": "/dev/nbd0", 00:06:06.165 "bdev_name": "Malloc0" 00:06:06.165 }, 00:06:06.165 { 00:06:06.165 "nbd_device": "/dev/nbd1", 00:06:06.165 "bdev_name": "Malloc1" 00:06:06.165 } 00:06:06.165 ]' 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.165 { 00:06:06.165 "nbd_device": "/dev/nbd0", 00:06:06.165 "bdev_name": "Malloc0" 00:06:06.165 }, 00:06:06.165 { 00:06:06.165 "nbd_device": "/dev/nbd1", 00:06:06.165 "bdev_name": "Malloc1" 00:06:06.165 } 00:06:06.165 ]' 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.165 /dev/nbd1' 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.165 /dev/nbd1' 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.165 256+0 records in 00:06:06.165 256+0 records out 00:06:06.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430442 s, 244 MB/s 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.165 256+0 records in 00:06:06.165 256+0 records out 00:06:06.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191732 s, 54.7 MB/s 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.165 256+0 records in 00:06:06.165 256+0 records out 00:06:06.165 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200632 s, 52.3 MB/s 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.165 12:09:36 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.165 12:09:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.425 12:09:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.425 12:09:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.425 12:09:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.425 12:09:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.425 12:09:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.425 12:09:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.425 12:09:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.425 12:09:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.425 12:09:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.425 12:09:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.684 12:09:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.684 12:09:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.684 12:09:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.684 12:09:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.684 12:09:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.684 12:09:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.684 12:09:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.684 12:09:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.684 12:09:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.684 12:09:37 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.684 12:09:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.942 12:09:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.942 12:09:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.942 12:09:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.942 12:09:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.942 12:09:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.942 12:09:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.942 12:09:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.942 12:09:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.942 12:09:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.942 12:09:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.942 12:09:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.942 12:09:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.942 12:09:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.199 12:09:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.134 [2024-12-05 12:09:38.733346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.134 [2024-12-05 12:09:38.830379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.134 [2024-12-05 12:09:38.830397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.134 [2024-12-05 12:09:38.939172] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.134 [2024-12-05 12:09:38.939254] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.745 spdk_app_start Round 1 00:06:10.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.745 12:09:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.745 12:09:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:10.745 12:09:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58601 /var/tmp/spdk-nbd.sock 00:06:10.745 12:09:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58601 ']' 00:06:10.745 12:09:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.745 12:09:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.745 12:09:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
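Round 1 now repeats the data-path check Round 0 just finished: two malloc bdevs (64 MB, 4096-byte blocks) are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is written through each device with O_DIRECT, then compared byte-for-byte against the source file. Condensed to a single device, with a shortened temp path for the sketch, that flow is roughly:

    SOCK=/var/tmp/spdk-nbd.sock
    tmp=/tmp/nbdrandtest          # the suite keeps this under test/event/

    scripts/rpc.py -s "$SOCK" bdev_malloc_create 64 4096      # -> Malloc0
    scripts/rpc.py -s "$SOCK" nbd_start_disk Malloc0 /dev/nbd0

    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB seed
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write via nbd
    cmp -b -n 1M "$tmp" /dev/nbd0                             # verify, as in @83
    rm "$tmp"

    scripts/rpc.py -s "$SOCK" nbd_stop_disk /dev/nbd0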
00:06:10.745 12:09:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.745 12:09:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.745 12:09:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.745 12:09:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:10.745 12:09:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.745 Malloc0 00:06:10.745 12:09:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.003 Malloc1 00:06:11.003 12:09:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.003 12:09:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.003 12:09:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.003 12:09:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.003 12:09:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.003 12:09:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.003 12:09:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.003 12:09:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.003 12:09:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.003 12:09:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.003 12:09:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.003 12:09:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.003 12:09:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:11.003 12:09:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.003 12:09:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.003 12:09:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.261 /dev/nbd0 00:06:11.261 12:09:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.261 12:09:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.261 12:09:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:11.261 12:09:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:11.261 12:09:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:11.261 12:09:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:11.261 12:09:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:11.261 12:09:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:11.261 12:09:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:11.261 12:09:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:11.261 12:09:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.261 1+0 records in 00:06:11.261 1+0 records out 
00:06:11.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000139781 s, 29.3 MB/s 00:06:11.261 12:09:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.261 12:09:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:11.261 12:09:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.261 12:09:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:11.261 12:09:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:11.261 12:09:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.261 12:09:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.261 12:09:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.261 /dev/nbd1 00:06:11.520 12:09:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.520 12:09:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.520 12:09:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:11.520 12:09:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:11.520 12:09:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:11.520 12:09:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:11.520 12:09:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:11.520 12:09:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:11.520 12:09:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:11.520 12:09:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:11.520 12:09:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.520 1+0 records in 00:06:11.520 1+0 records out 00:06:11.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030431 s, 13.5 MB/s 00:06:11.520 12:09:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.520 12:09:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:11.520 12:09:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.520 12:09:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:11.520 12:09:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:11.520 12:09:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.520 12:09:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.520 12:09:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.520 12:09:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.520 12:09:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.520 12:09:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.520 { 00:06:11.520 "nbd_device": "/dev/nbd0", 00:06:11.520 "bdev_name": "Malloc0" 00:06:11.520 }, 00:06:11.520 { 00:06:11.520 "nbd_device": "/dev/nbd1", 00:06:11.520 "bdev_name": "Malloc1" 00:06:11.520 } 
00:06:11.520 ]' 00:06:11.520 12:09:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.520 { 00:06:11.520 "nbd_device": "/dev/nbd0", 00:06:11.520 "bdev_name": "Malloc0" 00:06:11.520 }, 00:06:11.520 { 00:06:11.520 "nbd_device": "/dev/nbd1", 00:06:11.520 "bdev_name": "Malloc1" 00:06:11.520 } 00:06:11.520 ]' 00:06:11.520 12:09:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.779 /dev/nbd1' 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.779 /dev/nbd1' 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.779 256+0 records in 00:06:11.779 256+0 records out 00:06:11.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00424764 s, 247 MB/s 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.779 256+0 records in 00:06:11.779 256+0 records out 00:06:11.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01664 s, 63.0 MB/s 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.779 256+0 records in 00:06:11.779 256+0 records out 00:06:11.779 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168909 s, 62.1 MB/s 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.779 12:09:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.038 12:09:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.038 12:09:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.038 12:09:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.038 12:09:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.038 12:09:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.038 12:09:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.038 12:09:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.038 12:09:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.038 12:09:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.038 12:09:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.295 12:09:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.295 12:09:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.295 12:09:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.295 12:09:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.295 12:09:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.295 12:09:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.295 12:09:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.295 12:09:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.295 12:09:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.295 12:09:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.295 12:09:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.295 12:09:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.295 12:09:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.295 12:09:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:12.295 12:09:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.295 12:09:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.295 12:09:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.553 12:09:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:12.554 12:09:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.554 12:09:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.554 12:09:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.554 12:09:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.554 12:09:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.554 12:09:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.811 12:09:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:13.376 [2024-12-05 12:09:44.074642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.376 [2024-12-05 12:09:44.173146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.376 [2024-12-05 12:09:44.173267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.633 [2024-12-05 12:09:44.284363] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.633 [2024-12-05 12:09:44.284426] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:16.157 spdk_app_start Round 2 00:06:16.157 12:09:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:16.157 12:09:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:16.157 12:09:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58601 /var/tmp/spdk-nbd.sock 00:06:16.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:16.157 12:09:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58601 ']' 00:06:16.157 12:09:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.157 12:09:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.157 12:09:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
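The teardown sequence above is worth unpacking once: after both devices are stopped, nbd_get_disks returns an empty JSON array, jq extracts no .nbd_device entries, and the grep -c count of /dev/nbd matches must drop to 0 (it was 2 while the disks were attached). A compact illustrative version of that assertion:

    SOCK=/var/tmp/spdk-nbd.sock
    json=$(scripts/rpc.py -s "$SOCK" nbd_get_disks)

    # grep -c exits nonzero when it counts zero matches, hence the
    # "|| true" guard that also shows up in the trace as "# true".
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)

    [ "$count" -eq 0 ] || echo "expected no nbd devices, found $count"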
00:06:16.157 12:09:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.157 12:09:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:16.157 12:09:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.157 12:09:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:16.157 12:09:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.157 Malloc0 00:06:16.157 12:09:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.414 Malloc1 00:06:16.415 12:09:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.415 12:09:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.415 12:09:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.415 12:09:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.415 12:09:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.415 12:09:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.415 12:09:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.415 12:09:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.415 12:09:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.415 12:09:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.415 12:09:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.415 12:09:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.415 12:09:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:16.415 12:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.415 12:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.415 12:09:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.672 /dev/nbd0 00:06:16.672 12:09:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.672 12:09:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.672 12:09:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:16.672 12:09:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:16.672 12:09:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.672 12:09:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.672 12:09:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:16.672 12:09:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:16.672 12:09:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.672 12:09:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.672 12:09:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.672 1+0 records in 00:06:16.672 1+0 records out 
00:06:16.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215024 s, 19.0 MB/s 00:06:16.672 12:09:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.672 12:09:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:16.672 12:09:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.672 12:09:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.672 12:09:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:16.672 12:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.672 12:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.672 12:09:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.929 /dev/nbd1 00:06:16.929 12:09:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.929 12:09:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.929 12:09:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:16.929 12:09:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:16.930 12:09:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.930 12:09:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.930 12:09:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:16.930 12:09:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:16.930 12:09:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.930 12:09:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.930 12:09:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.930 1+0 records in 00:06:16.930 1+0 records out 00:06:16.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026013 s, 15.7 MB/s 00:06:16.930 12:09:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.930 12:09:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:16.930 12:09:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.930 12:09:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.930 12:09:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:16.930 12:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.930 12:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.930 12:09:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.930 12:09:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.930 12:09:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.188 { 00:06:17.188 "nbd_device": "/dev/nbd0", 00:06:17.188 "bdev_name": "Malloc0" 00:06:17.188 }, 00:06:17.188 { 00:06:17.188 "nbd_device": "/dev/nbd1", 00:06:17.188 "bdev_name": "Malloc1" 00:06:17.188 } 
00:06:17.188 ]' 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.188 { 00:06:17.188 "nbd_device": "/dev/nbd0", 00:06:17.188 "bdev_name": "Malloc0" 00:06:17.188 }, 00:06:17.188 { 00:06:17.188 "nbd_device": "/dev/nbd1", 00:06:17.188 "bdev_name": "Malloc1" 00:06:17.188 } 00:06:17.188 ]' 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.188 /dev/nbd1' 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.188 /dev/nbd1' 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.188 256+0 records in 00:06:17.188 256+0 records out 00:06:17.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00733141 s, 143 MB/s 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.188 256+0 records in 00:06:17.188 256+0 records out 00:06:17.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015019 s, 69.8 MB/s 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.188 256+0 records in 00:06:17.188 256+0 records out 00:06:17.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021405 s, 49.0 MB/s 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.188 12:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.189 12:09:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.447 12:09:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.447 12:09:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.447 12:09:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.447 12:09:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.447 12:09:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.447 12:09:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.447 12:09:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.447 12:09:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.447 12:09:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.447 12:09:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.706 12:09:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.706 12:09:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.706 12:09:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.706 12:09:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.706 12:09:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.706 12:09:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.706 12:09:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.706 12:09:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.706 12:09:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.706 12:09:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.706 12:09:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.706 12:09:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.706 12:09:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.706 12:09:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:17.963 12:09:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.963 12:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.964 12:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.964 12:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:17.964 12:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.964 12:09:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.964 12:09:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.964 12:09:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.964 12:09:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.964 12:09:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.222 12:09:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:18.788 [2024-12-05 12:09:49.500629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.788 [2024-12-05 12:09:49.590957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.788 [2024-12-05 12:09:49.591168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.045 [2024-12-05 12:09:49.701431] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.045 [2024-12-05 12:09:49.701507] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.571 12:09:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58601 /var/tmp/spdk-nbd.sock 00:06:21.571 12:09:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58601 ']' 00:06:21.571 12:09:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.571 12:09:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.571 12:09:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
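With all three rounds done, the trace next tears the app down through killprocess, whose checks are visible in the teardowns above: confirm the pid is alive with kill -0, read the command name with ps, refuse to signal a sudo wrapper directly, then kill and reap. A simplified sketch (the real helper also handles killing the child of a sudo parent, omitted here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1           # don't signal the sudo parent
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                              # reap; propagates exit status
    }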
00:06:21.571 12:09:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.571 12:09:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.571 12:09:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.571 12:09:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:21.571 12:09:52 event.app_repeat -- event/event.sh@39 -- # killprocess 58601 00:06:21.571 12:09:52 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58601 ']' 00:06:21.571 12:09:52 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58601 00:06:21.571 12:09:52 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:21.571 12:09:52 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.571 12:09:52 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58601 00:06:21.571 killing process with pid 58601 00:06:21.571 12:09:52 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.571 12:09:52 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.571 12:09:52 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58601' 00:06:21.571 12:09:52 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58601 00:06:21.571 12:09:52 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58601 00:06:21.830 spdk_app_start is called in Round 0. 00:06:21.830 Shutdown signal received, stop current app iteration 00:06:21.830 Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 reinitialization... 00:06:21.830 spdk_app_start is called in Round 1. 00:06:21.830 Shutdown signal received, stop current app iteration 00:06:21.830 Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 reinitialization... 00:06:21.830 spdk_app_start is called in Round 2. 00:06:21.830 Shutdown signal received, stop current app iteration 00:06:21.830 Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 reinitialization... 00:06:21.830 spdk_app_start is called in Round 3. 00:06:21.830 Shutdown signal received, stop current app iteration 00:06:21.830 12:09:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:21.830 12:09:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:21.830 00:06:21.830 real 0m17.980s 00:06:21.830 user 0m39.199s 00:06:21.830 sys 0m2.275s 00:06:21.830 12:09:52 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.830 ************************************ 00:06:21.830 END TEST app_repeat 00:06:21.830 ************************************ 00:06:21.830 12:09:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.088 12:09:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:22.088 12:09:52 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:22.088 12:09:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.088 12:09:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.088 12:09:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.088 ************************************ 00:06:22.088 START TEST cpu_locks 00:06:22.088 ************************************ 00:06:22.088 12:09:52 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:22.088 * Looking for test storage... 
00:06:22.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:22.088 12:09:52 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.088 12:09:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.088 12:09:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.088 12:09:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.088 12:09:52 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:22.088 12:09:52 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.088 12:09:52 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.088 --rc genhtml_branch_coverage=1 00:06:22.088 --rc genhtml_function_coverage=1 00:06:22.088 --rc genhtml_legend=1 00:06:22.088 --rc geninfo_all_blocks=1 00:06:22.088 --rc geninfo_unexecuted_blocks=1 00:06:22.088 00:06:22.088 ' 00:06:22.088 12:09:52 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.088 --rc genhtml_branch_coverage=1 00:06:22.088 --rc genhtml_function_coverage=1 
00:06:22.088 --rc genhtml_legend=1 00:06:22.088 --rc geninfo_all_blocks=1 00:06:22.088 --rc geninfo_unexecuted_blocks=1 00:06:22.088 00:06:22.088 ' 00:06:22.088 12:09:52 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.088 --rc genhtml_branch_coverage=1 00:06:22.088 --rc genhtml_function_coverage=1 00:06:22.088 --rc genhtml_legend=1 00:06:22.088 --rc geninfo_all_blocks=1 00:06:22.088 --rc geninfo_unexecuted_blocks=1 00:06:22.088 00:06:22.088 ' 00:06:22.088 12:09:52 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.088 --rc genhtml_branch_coverage=1 00:06:22.088 --rc genhtml_function_coverage=1 00:06:22.088 --rc genhtml_legend=1 00:06:22.088 --rc geninfo_all_blocks=1 00:06:22.088 --rc geninfo_unexecuted_blocks=1 00:06:22.088 00:06:22.088 ' 00:06:22.088 12:09:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:22.088 12:09:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:22.088 12:09:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:22.088 12:09:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:22.088 12:09:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.088 12:09:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.088 12:09:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.088 ************************************ 00:06:22.088 START TEST default_locks 00:06:22.089 ************************************ 00:06:22.089 12:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:22.089 12:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59026 00:06:22.089 12:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59026 00:06:22.089 12:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59026 ']' 00:06:22.089 12:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.089 12:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.089 12:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.089 12:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.089 12:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.089 12:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.346 [2024-12-05 12:09:52.964094] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
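[Editorial note on the lt/cmp_versions trace above: the coverage setup only passes the pre-2.0 option spelling (--rc lcov_branch_coverage=1 and friends) because the detected lcov is 1.15, i.e. below 2. The comparison itself is a plain per-component decimal compare over the dot-split version strings. A minimal standalone sketch of that logic, assuming purely numeric components (the real helper also splits on '-' and ':', as the IFS=.-: lines show):

    # lt A B: succeed when version A sorts strictly before version B
    lt() {
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earliest differing component decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "old lcov: use the --rc lcov_* option spelling"
]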
00:06:22.346 [2024-12-05 12:09:52.964240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59026 ] 00:06:22.346 [2024-12-05 12:09:53.121429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.604 [2024-12-05 12:09:53.220813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59026 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59026 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59026 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59026 ']' 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59026 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59026 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.203 killing process with pid 59026 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59026' 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59026 00:06:23.203 12:09:53 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59026 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59026 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59026 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59026 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59026 ']' 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.597 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.597 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59026) - No such process 00:06:24.597 ERROR: process (pid: 59026) is no longer running 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:24.597 00:06:24.597 real 0m2.353s 00:06:24.597 user 0m2.316s 00:06:24.597 sys 0m0.463s 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.597 12:09:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.597 ************************************ 00:06:24.597 END TEST default_locks 00:06:24.597 ************************************ 00:06:24.598 12:09:55 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:24.598 12:09:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.598 12:09:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.598 12:09:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.598 ************************************ 00:06:24.598 START TEST default_locks_via_rpc 00:06:24.598 ************************************ 00:06:24.598 12:09:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:24.598 12:09:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59090 00:06:24.598 12:09:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.598 12:09:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59090 00:06:24.598 12:09:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59090 ']' 00:06:24.598 12:09:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.598 12:09:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
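[Editorial note: the default_locks pass that just ended exercises the basic invariant of this suite: a target started with -m 0x1 must hold a per-core file lock, visible to util-linux lslocks as an entry containing spdk_cpu_lock, and once the process is killed, waiting on it again must fail (hence the deliberate "No such process" above). A sketch of both checks, with the pid taken from this run as a placeholder and the liveness probe simplified to kill -0:

    pid=59026   # pid from this run; any live spdk_tgt pid works
    # positive check: the running target holds its core lock
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held"
    # negative check (the test wraps this in NOT): a dead pid must fail
    kill -0 "$pid" 2>/dev/null || echo "process gone, rewait correctly fails"
]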
00:06:24.598 12:09:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.598 12:09:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.598 12:09:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.598 [2024-12-05 12:09:55.353882] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:06:24.598 [2024-12-05 12:09:55.354442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59090 ] 00:06:24.854 [2024-12-05 12:09:55.511484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.854 [2024-12-05 12:09:55.614800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59090 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.418 12:09:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59090 00:06:25.675 12:09:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59090 00:06:25.675 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59090 ']' 00:06:25.675 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59090 00:06:25.675 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:25.675 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.675 12:09:56 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59090 00:06:25.675 killing process with pid 59090 00:06:25.675 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.675 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.675 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59090' 00:06:25.675 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59090 00:06:25.675 12:09:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59090 00:06:27.049 ************************************ 00:06:27.049 END TEST default_locks_via_rpc 00:06:27.049 ************************************ 00:06:27.049 00:06:27.049 real 0m2.420s 00:06:27.049 user 0m2.377s 00:06:27.049 sys 0m0.490s 00:06:27.049 12:09:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.049 12:09:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.050 12:09:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:27.050 12:09:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.050 12:09:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.050 12:09:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.050 ************************************ 00:06:27.050 START TEST non_locking_app_on_locked_coremask 00:06:27.050 ************************************ 00:06:27.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.050 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:27.050 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59142 00:06:27.050 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59142 /var/tmp/spdk.sock 00:06:27.050 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.050 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59142 ']' 00:06:27.050 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.050 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.050 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.050 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.050 12:09:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.050 [2024-12-05 12:09:57.823353] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
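[Editorial note: default_locks_via_rpc, which finished above, drives the same lock through the JSON-RPC surface instead of process lifetime: framework_disable_cpumask_locks drops the lock file while the target keeps running, and framework_enable_cpumask_locks re-acquires it. A sketch of the exchange, assuming the stock scripts/rpc.py client, which exposes RPC methods as same-named subcommands:

    sock=/var/tmp/spdk.sock
    scripts/rpc.py -s "$sock" framework_disable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "locks released"
    scripts/rpc.py -s "$sock" framework_enable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_*   # the core-0 lock file is back
]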
00:06:27.050 [2024-12-05 12:09:57.823456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59142 ] 00:06:27.309 [2024-12-05 12:09:57.976019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.309 [2024-12-05 12:09:58.072622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.875 12:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.875 12:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:27.875 12:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59158 00:06:27.875 12:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59158 /var/tmp/spdk2.sock 00:06:27.875 12:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59158 ']' 00:06:27.875 12:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.875 12:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:27.875 12:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.875 12:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.875 12:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.875 12:09:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.134 [2024-12-05 12:09:58.744685] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:06:28.134 [2024-12-05 12:09:58.744985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59158 ] 00:06:28.134 [2024-12-05 12:09:58.911799] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
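[Editorial note: the launch just traced is the crux of non_locking_app_on_locked_coremask: the first target holds the core-0 lock, yet a second target on the same mask still comes up, because --disable-cpumask-locks makes it skip lock acquisition entirely (hence the "CPU core locks deactivated" notice). Reduced to its two commands, with the paths as in this run:

    build/bin/spdk_tgt -m 0x1 &                                                  # takes the core-0 lock
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # coexists; takes no lock
]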
00:06:28.134 [2024-12-05 12:09:58.911852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.392 [2024-12-05 12:09:59.104648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.325 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.325 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.325 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59142 00:06:29.325 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59142 00:06:29.325 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.891 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59142 00:06:29.891 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59142 ']' 00:06:29.891 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59142 00:06:29.891 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:29.891 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.891 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59142 00:06:29.891 killing process with pid 59142 00:06:29.891 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.891 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.891 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59142' 00:06:29.891 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59142 00:06:29.891 12:10:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59142 00:06:32.418 12:10:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59158 00:06:32.418 12:10:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59158 ']' 00:06:32.418 12:10:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59158 00:06:32.418 12:10:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:32.418 12:10:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.418 12:10:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59158 00:06:32.418 killing process with pid 59158 00:06:32.418 12:10:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.418 12:10:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.418 12:10:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59158' 00:06:32.418 12:10:03 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59158 00:06:32.418 12:10:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59158 00:06:33.786 ************************************ 00:06:33.786 END TEST non_locking_app_on_locked_coremask 00:06:33.786 ************************************ 00:06:33.786 00:06:33.786 real 0m6.700s 00:06:33.786 user 0m6.846s 00:06:33.786 sys 0m0.952s 00:06:33.786 12:10:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.786 12:10:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.786 12:10:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:33.786 12:10:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.786 12:10:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.786 12:10:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.786 ************************************ 00:06:33.786 START TEST locking_app_on_unlocked_coremask 00:06:33.786 ************************************ 00:06:33.786 12:10:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:33.786 12:10:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59260 00:06:33.786 12:10:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59260 /var/tmp/spdk.sock 00:06:33.786 12:10:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:33.786 12:10:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59260 ']' 00:06:33.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.786 12:10:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.786 12:10:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.786 12:10:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.786 12:10:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.786 12:10:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.786 [2024-12-05 12:10:04.580485] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:06:33.786 [2024-12-05 12:10:04.580785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59260 ] 00:06:34.043 [2024-12-05 12:10:04.738269] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:34.043 [2024-12-05 12:10:04.738482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.043 [2024-12-05 12:10:04.840647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.607 12:10:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.607 12:10:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:34.607 12:10:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59271 00:06:34.607 12:10:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59271 /var/tmp/spdk2.sock 00:06:34.607 12:10:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:34.607 12:10:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59271 ']' 00:06:34.607 12:10:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.607 12:10:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.607 12:10:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.607 12:10:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.607 12:10:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.863 [2024-12-05 12:10:05.487199] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
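[Editorial note: locking_app_on_unlocked_coremask inverts the previous ordering: the first target (pid 59260) opted out of locking, so core 0 is unclaimed and the second, default-configured target (pid 59271) acquires the lock as usual. That is why the lslocks check that follows is aimed at the second pid; restated in one line:

    lslocks -p 59271 | grep -q spdk_cpu_lock && echo "lock went to the non-disabled target"
]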
00:06:34.863 [2024-12-05 12:10:05.487445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59271 ] 00:06:34.863 [2024-12-05 12:10:05.652994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.121 [2024-12-05 12:10:05.859068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.492 12:10:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.493 12:10:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:36.493 12:10:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59271 00:06:36.493 12:10:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59271 00:06:36.493 12:10:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.493 12:10:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59260 00:06:36.493 12:10:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59260 ']' 00:06:36.493 12:10:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59260 00:06:36.493 12:10:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:36.493 12:10:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.493 12:10:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59260 00:06:36.493 killing process with pid 59260 00:06:36.493 12:10:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.493 12:10:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.493 12:10:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59260' 00:06:36.493 12:10:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59260 00:06:36.493 12:10:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59260 00:06:39.796 12:10:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59271 00:06:39.796 12:10:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59271 ']' 00:06:39.796 12:10:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59271 00:06:39.796 12:10:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.796 12:10:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.796 12:10:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59271 00:06:39.796 killing process with pid 59271 00:06:39.796 12:10:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.796 12:10:10 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.796 12:10:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59271' 00:06:39.796 12:10:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59271 00:06:39.796 12:10:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59271 00:06:40.729 00:06:40.729 real 0m6.851s 00:06:40.729 user 0m7.061s 00:06:40.729 sys 0m0.965s 00:06:40.729 12:10:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.729 ************************************ 00:06:40.729 END TEST locking_app_on_unlocked_coremask 00:06:40.729 ************************************ 00:06:40.729 12:10:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.729 12:10:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:40.729 12:10:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.729 12:10:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.729 12:10:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.729 ************************************ 00:06:40.729 START TEST locking_app_on_locked_coremask 00:06:40.729 ************************************ 00:06:40.729 12:10:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:40.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.729 12:10:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59373 00:06:40.729 12:10:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59373 /var/tmp/spdk.sock 00:06:40.729 12:10:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59373 ']' 00:06:40.729 12:10:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.729 12:10:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.729 12:10:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.729 12:10:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.729 12:10:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.729 12:10:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.729 [2024-12-05 12:10:11.475386] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
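[Editorial note: the killprocess helper that keeps appearing in these traces performs a few guarded steps before killing: confirm the pid is alive, confirm on Linux via the ps comm field what is being killed, then kill and reap. A condensed sketch of that sequence, assuming the target is a child of the current shell so wait can reap it (the real helper also special-cases sudo-wrapped targets; this sketch just bails out):

    killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return                           # already gone
      [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1   # sudo needs special handling
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                                    # reap; propagates exit status
    }
]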
00:06:40.729 [2024-12-05 12:10:11.475558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59373 ] 00:06:40.987 [2024-12-05 12:10:11.637276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.987 [2024-12-05 12:10:11.756855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59389 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59389 /var/tmp/spdk2.sock 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59389 /var/tmp/spdk2.sock 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59389 /var/tmp/spdk2.sock 00:06:41.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59389 ']' 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.921 12:10:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.921 [2024-12-05 12:10:12.513803] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
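[Editorial note: the NOT wrapper around waitforlisten 59389 above encodes "this command is expected to fail": the second target may not claim core 0 while pid 59373 holds it. Judging from the es bookkeeping in the trace, the wrapper inverts a plain non-zero exit but still propagates statuses above 128, so a crash by signal is never mistaken for the expected failure. A sketch of those semantics:

    NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"   # killed by a signal: genuine failure
      (( es != 0 ))                    # succeed only if the command failed
    }
]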
00:06:41.921 [2024-12-05 12:10:12.514761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59389 ] 00:06:41.921 [2024-12-05 12:10:12.704458] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59373 has claimed it. 00:06:41.921 [2024-12-05 12:10:12.708584] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:42.487 ERROR: process (pid: 59389) is no longer running 00:06:42.487 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59389) - No such process 00:06:42.487 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.487 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:42.487 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:42.487 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:42.487 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:42.487 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:42.487 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59373 00:06:42.487 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.487 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59373 00:06:42.745 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59373 00:06:42.745 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59373 ']' 00:06:42.745 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59373 00:06:42.745 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.745 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.745 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59373 00:06:42.745 killing process with pid 59373 00:06:42.745 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.745 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.745 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59373' 00:06:42.745 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59373 00:06:42.745 12:10:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59373 00:06:44.646 ************************************ 00:06:44.646 END TEST locking_app_on_locked_coremask 00:06:44.646 ************************************ 00:06:44.646 00:06:44.646 real 0m3.674s 00:06:44.646 user 0m3.835s 00:06:44.646 sys 0m0.645s 00:06:44.646 12:10:15 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.646 12:10:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.646 12:10:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:44.646 12:10:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.646 12:10:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.646 12:10:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.646 ************************************ 00:06:44.646 START TEST locking_overlapped_coremask 00:06:44.646 ************************************ 00:06:44.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.646 12:10:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:44.646 12:10:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59447 00:06:44.646 12:10:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59447 /var/tmp/spdk.sock 00:06:44.646 12:10:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:44.646 12:10:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59447 ']' 00:06:44.646 12:10:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.646 12:10:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.646 12:10:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.646 12:10:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.646 12:10:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.646 [2024-12-05 12:10:15.191758] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:06:44.646 [2024-12-05 12:10:15.192091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59447 ] 00:06:44.646 [2024-12-05 12:10:15.355495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.646 [2024-12-05 12:10:15.479063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.646 [2024-12-05 12:10:15.479165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.646 [2024-12-05 12:10:15.479352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59465 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59465 /var/tmp/spdk2.sock 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59465 /var/tmp/spdk2.sock 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:45.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59465 /var/tmp/spdk2.sock 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59465 ']' 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.578 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.578 [2024-12-05 12:10:16.245167] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
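[Editorial note: the overlap this test provokes is plain mask arithmetic: -m 0x7 is binary 111 (cores 0-2) and -m 0x1c is binary 11100 (cores 2-4), so the two targets collide exactly on core 2 — the core named in the lock error that follows. Checking the intersection directly:

    printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, bit 2 set => core 2 is shared
]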
00:06:45.578 [2024-12-05 12:10:16.245317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59465 ] 00:06:45.578 [2024-12-05 12:10:16.422907] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59447 has claimed it. 00:06:45.578 [2024-12-05 12:10:16.422971] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.145 ERROR: process (pid: 59465) is no longer running 00:06:46.145 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59465) - No such process 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59447 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59447 ']' 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59447 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59447 00:06:46.145 killing process with pid 59447 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59447' 00:06:46.145 12:10:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59447 00:06:46.145 12:10:16 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59447 00:06:47.537 00:06:47.537 real 0m3.185s 00:06:47.537 user 0m8.589s 00:06:47.537 sys 0m0.541s 00:06:47.537 12:10:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.537 ************************************ 00:06:47.537 END TEST locking_overlapped_coremask 00:06:47.537 ************************************ 00:06:47.537 12:10:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.537 12:10:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:47.537 12:10:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.537 12:10:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.537 12:10:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.537 ************************************ 00:06:47.537 START TEST locking_overlapped_coremask_via_rpc 00:06:47.537 ************************************ 00:06:47.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.537 12:10:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:47.537 12:10:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59518 00:06:47.537 12:10:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59518 /var/tmp/spdk.sock 00:06:47.537 12:10:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59518 ']' 00:06:47.537 12:10:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.537 12:10:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.537 12:10:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.537 12:10:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.537 12:10:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.537 12:10:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:47.795 [2024-12-05 12:10:18.411901] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:06:47.795 [2024-12-05 12:10:18.412022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59518 ] 00:06:47.795 [2024-12-05 12:10:18.565638] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
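[Editorial note: the check_remaining_locks step at the end of the overlapped test (the /var/tmp/spdk_cpu_lock_* globbing above) leans on the lock-file naming: one file per claimed core, with a zero-padded three-digit core index. For a target on -m 0x7 the on-disk set must therefore match cores 0-2 exactly, as the trace's pattern match asserts; restated with a plain string compare:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "exactly cores 0-2 are locked"
]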
00:06:47.795 [2024-12-05 12:10:18.565833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.054 [2024-12-05 12:10:18.671135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.054 [2024-12-05 12:10:18.671408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.054 [2024-12-05 12:10:18.671458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.621 12:10:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.621 12:10:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:48.621 12:10:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59536 00:06:48.621 12:10:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:48.621 12:10:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59536 /var/tmp/spdk2.sock 00:06:48.621 12:10:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59536 ']' 00:06:48.621 12:10:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.621 12:10:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.621 12:10:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.621 12:10:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.621 12:10:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.621 [2024-12-05 12:10:19.364727] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:06:48.621 [2024-12-05 12:10:19.365692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59536 ] 00:06:48.880 [2024-12-05 12:10:19.550523] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
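[Editorial note: for the via_rpc variant both targets start with --disable-cpumask-locks, which is why the same overlapping masks that just failed can now coexist; the contention is re-introduced afterwards over RPC. The setup, as launched in this run:

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                            # cores 0-2, unlocked
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # cores 2-4, unlocked
]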
00:06:48.880 [2024-12-05 12:10:19.550577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.880 [2024-12-05 12:10:19.725488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.880 [2024-12-05 12:10:19.725597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.880 [2024-12-05 12:10:19.725620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.253 [2024-12-05 12:10:20.719650] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59518 has claimed it. 00:06:50.253 request: 00:06:50.253 { 00:06:50.253 "method": "framework_enable_cpumask_locks", 00:06:50.253 "req_id": 1 00:06:50.253 } 00:06:50.253 Got JSON-RPC error response 00:06:50.253 response: 00:06:50.253 { 00:06:50.253 "code": -32603, 00:06:50.253 "message": "Failed to claim CPU core: 2" 00:06:50.253 } 00:06:50.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
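The failure above is the expected outcome of the overlapped-coremask test. The first target holds mask 0x7 (cores 0-2) and the second was started with mask 0x1c (cores 2-4), so the two masks overlap on core 2. Because both targets run with --disable-cpumask-locks, each boots cleanly ("CPU core locks deactivated"); the per-core lock files under /var/tmp/spdk_cpu_lock_* are only claimed once framework_enable_cpumask_locks is issued, which is why the first RPC succeeds and the second fails. A minimal sketch of the sequence, using the same binaries and sockets as this run:

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0,1,2
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2,3,4
    scripts/rpc.py framework_enable_cpumask_locks                                # first target claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # fails on shared core 2
    # => JSON-RPC error -32603, "Failed to claim CPU core: 2" (matching the response above)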
00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59518 /var/tmp/spdk.sock 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59518 ']' 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59536 /var/tmp/spdk2.sock 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59536 ']' 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.253 12:10:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.510 ************************************ 00:06:50.510 END TEST locking_overlapped_coremask_via_rpc 00:06:50.510 ************************************ 00:06:50.510 12:10:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.510 12:10:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.510 12:10:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:50.510 12:10:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.510 12:10:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.510 12:10:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.510 00:06:50.510 real 0m2.815s 00:06:50.510 user 0m1.090s 00:06:50.510 sys 0m0.140s 00:06:50.510 12:10:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.510 12:10:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.510 12:10:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:50.510 12:10:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59518 ]] 00:06:50.510 12:10:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59518 00:06:50.510 12:10:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59518 ']' 00:06:50.510 12:10:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59518 00:06:50.510 12:10:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:50.510 12:10:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.510 12:10:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59518 00:06:50.510 killing process with pid 59518 00:06:50.510 12:10:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.510 12:10:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.510 12:10:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59518' 00:06:50.510 12:10:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59518 00:06:50.510 12:10:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59518 00:06:52.003 12:10:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59536 ]] 00:06:52.003 12:10:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59536 00:06:52.003 12:10:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59536 ']' 00:06:52.003 12:10:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59536 00:06:52.003 12:10:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:52.003 12:10:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.003 
12:10:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59536 00:06:52.003 killing process with pid 59536 00:06:52.003 12:10:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:52.003 12:10:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:52.003 12:10:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59536' 00:06:52.003 12:10:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59536 00:06:52.003 12:10:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59536 00:06:53.380 12:10:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.380 Process with pid 59518 is not found 00:06:53.380 Process with pid 59536 is not found 00:06:53.380 12:10:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:53.380 12:10:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59518 ]] 00:06:53.380 12:10:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59518 00:06:53.380 12:10:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59518 ']' 00:06:53.380 12:10:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59518 00:06:53.380 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59518) - No such process 00:06:53.380 12:10:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59518 is not found' 00:06:53.380 12:10:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59536 ]] 00:06:53.380 12:10:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59536 00:06:53.380 12:10:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59536 ']' 00:06:53.380 12:10:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59536 00:06:53.380 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59536) - No such process 00:06:53.380 12:10:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59536 is not found' 00:06:53.380 12:10:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.380 00:06:53.380 real 0m31.119s 00:06:53.380 user 0m52.641s 00:06:53.380 sys 0m5.059s 00:06:53.380 12:10:23 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.380 ************************************ 00:06:53.380 END TEST cpu_locks 00:06:53.380 ************************************ 00:06:53.380 12:10:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.380 ************************************ 00:06:53.380 END TEST event 00:06:53.380 ************************************ 00:06:53.380 00:06:53.380 real 0m57.081s 00:06:53.380 user 1m44.846s 00:06:53.380 sys 0m8.159s 00:06:53.380 12:10:23 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.380 12:10:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.380 12:10:23 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:53.380 12:10:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.380 12:10:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.380 12:10:23 -- common/autotest_common.sh@10 -- # set +x 00:06:53.380 ************************************ 00:06:53.381 START TEST thread 00:06:53.381 ************************************ 00:06:53.381 12:10:23 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:53.381 * Looking for test storage... 
00:06:53.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:53.381 12:10:24 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:53.381 12:10:24 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:53.381 12:10:24 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:53.381 12:10:24 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:53.381 12:10:24 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.381 12:10:24 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.381 12:10:24 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.381 12:10:24 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.381 12:10:24 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.381 12:10:24 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.381 12:10:24 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.381 12:10:24 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.381 12:10:24 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.381 12:10:24 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.381 12:10:24 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.381 12:10:24 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:53.381 12:10:24 thread -- scripts/common.sh@345 -- # : 1 00:06:53.381 12:10:24 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.381 12:10:24 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.381 12:10:24 thread -- scripts/common.sh@365 -- # decimal 1 00:06:53.381 12:10:24 thread -- scripts/common.sh@353 -- # local d=1 00:06:53.381 12:10:24 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.381 12:10:24 thread -- scripts/common.sh@355 -- # echo 1 00:06:53.381 12:10:24 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.381 12:10:24 thread -- scripts/common.sh@366 -- # decimal 2 00:06:53.381 12:10:24 thread -- scripts/common.sh@353 -- # local d=2 00:06:53.381 12:10:24 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.381 12:10:24 thread -- scripts/common.sh@355 -- # echo 2 00:06:53.381 12:10:24 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.381 12:10:24 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.381 12:10:24 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.381 12:10:24 thread -- scripts/common.sh@368 -- # return 0 00:06:53.381 12:10:24 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.381 12:10:24 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:53.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.381 --rc genhtml_branch_coverage=1 00:06:53.381 --rc genhtml_function_coverage=1 00:06:53.381 --rc genhtml_legend=1 00:06:53.381 --rc geninfo_all_blocks=1 00:06:53.381 --rc geninfo_unexecuted_blocks=1 00:06:53.381 00:06:53.381 ' 00:06:53.381 12:10:24 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:53.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.381 --rc genhtml_branch_coverage=1 00:06:53.381 --rc genhtml_function_coverage=1 00:06:53.381 --rc genhtml_legend=1 00:06:53.381 --rc geninfo_all_blocks=1 00:06:53.381 --rc geninfo_unexecuted_blocks=1 00:06:53.381 00:06:53.381 ' 00:06:53.381 12:10:24 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:53.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:53.381 --rc genhtml_branch_coverage=1 00:06:53.381 --rc genhtml_function_coverage=1 00:06:53.381 --rc genhtml_legend=1 00:06:53.381 --rc geninfo_all_blocks=1 00:06:53.381 --rc geninfo_unexecuted_blocks=1 00:06:53.381 00:06:53.381 ' 00:06:53.381 12:10:24 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:53.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.381 --rc genhtml_branch_coverage=1 00:06:53.381 --rc genhtml_function_coverage=1 00:06:53.381 --rc genhtml_legend=1 00:06:53.381 --rc geninfo_all_blocks=1 00:06:53.381 --rc geninfo_unexecuted_blocks=1 00:06:53.381 00:06:53.381 ' 00:06:53.381 12:10:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.381 12:10:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:53.381 12:10:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.381 12:10:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.381 ************************************ 00:06:53.381 START TEST thread_poller_perf 00:06:53.381 ************************************ 00:06:53.381 12:10:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.381 [2024-12-05 12:10:24.184152] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:06:53.381 [2024-12-05 12:10:24.184410] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59695 ] 00:06:53.643 [2024-12-05 12:10:24.347329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.643 [2024-12-05 12:10:24.465655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.643 Running 1000 pollers for 1 seconds with 1 microseconds period. 
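For reference, the poller_perf flags map directly onto the banner just printed: -b 1000 registers 1000 pollers, -l 1 gives each a 1-microsecond period, and -t 1 runs the benchmark for one second. The second invocation below uses -l 0, i.e. untimed pollers that run on every reactor iteration:

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # timed pollers (this run)
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # untimed pollers (next run)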
00:06:55.030 [2024-12-05T12:10:25.899Z] ====================================== 00:06:55.030 [2024-12-05T12:10:25.899Z] busy:2611653822 (cyc) 00:06:55.030 [2024-12-05T12:10:25.899Z] total_run_count: 306000 00:06:55.030 [2024-12-05T12:10:25.899Z] tsc_hz: 2600000000 (cyc) 00:06:55.030 [2024-12-05T12:10:25.899Z] ====================================== 00:06:55.030 [2024-12-05T12:10:25.899Z] poller_cost: 8534 (cyc), 3282 (nsec) 00:06:55.030 00:06:55.030 ************************************ 00:06:55.030 END TEST thread_poller_perf 00:06:55.030 ************************************ 00:06:55.030 real 0m1.485s 00:06:55.030 user 0m1.305s 00:06:55.030 sys 0m0.072s 00:06:55.030 12:10:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.030 12:10:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.030 12:10:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.030 12:10:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:55.030 12:10:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.030 12:10:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.030 ************************************ 00:06:55.030 START TEST thread_poller_perf 00:06:55.030 ************************************ 00:06:55.030 12:10:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.030 [2024-12-05 12:10:25.726778] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:06:55.030 [2024-12-05 12:10:25.727007] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59733 ] 00:06:55.030 [2024-12-05 12:10:25.889035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.290 [2024-12-05 12:10:25.998667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.290 Running 1000 pollers for 1 seconds with 0 microseconds period. 
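The summary tables are internally consistent: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. Checking the arithmetic for the run above and for the -l 0 run just announced:

    echo '2611653822 / 306000' | bc        # = 8534 cyc per poller call (first run)
    echo '8534 * 10^9 / 2600000000' | bc   # = 3282 nsec at tsc_hz = 2.6 GHz
    echo '2603652802 / 3970000' | bc       # = 655 cyc (second run, below)
    echo '655 * 10^9 / 2600000000' | bc    # = 251 nsec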
00:06:56.754 [2024-12-05T12:10:27.623Z] ====================================== 00:06:56.754 [2024-12-05T12:10:27.623Z] busy:2603652802 (cyc) 00:06:56.754 [2024-12-05T12:10:27.623Z] total_run_count: 3970000 00:06:56.754 [2024-12-05T12:10:27.623Z] tsc_hz: 2600000000 (cyc) 00:06:56.754 [2024-12-05T12:10:27.623Z] ====================================== 00:06:56.754 [2024-12-05T12:10:27.623Z] poller_cost: 655 (cyc), 251 (nsec) 00:06:56.754 00:06:56.754 real 0m1.476s 00:06:56.754 user 0m1.293s 00:06:56.754 sys 0m0.073s 00:06:56.754 12:10:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.754 ************************************ 00:06:56.754 END TEST thread_poller_perf 00:06:56.754 12:10:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.754 ************************************ 00:06:56.754 12:10:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:56.754 00:06:56.754 real 0m3.265s 00:06:56.754 user 0m2.755s 00:06:56.754 sys 0m0.264s 00:06:56.754 12:10:27 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.754 ************************************ 00:06:56.754 END TEST thread 00:06:56.754 12:10:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.754 ************************************ 00:06:56.754 12:10:27 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:56.754 12:10:27 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:56.754 12:10:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.754 12:10:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.754 12:10:27 -- common/autotest_common.sh@10 -- # set +x 00:06:56.754 ************************************ 00:06:56.754 START TEST app_cmdline 00:06:56.754 ************************************ 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:56.754 * Looking for test storage... 
00:06:56.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:56.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
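The spdk_tgt being started here for app_cmdline (pid 59816, launch line below) runs with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two RPCs are served. The test exercises both sides of that restriction: spdk_get_version returns the version object, while env_dpdk_get_mem_stats is rejected with -32601 "Method not found". In outline:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version         # ok: reports SPDK v25.01-pre git sha1 85bc1e85a
    scripts/rpc.py rpc_get_methods          # ok: lists exactly the two allowed methods
    scripts/rpc.py env_dpdk_get_mem_stats   # rejected: -32601 "Method not found"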
00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.754 12:10:27 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.754 --rc genhtml_branch_coverage=1 00:06:56.754 --rc genhtml_function_coverage=1 00:06:56.754 --rc genhtml_legend=1 00:06:56.754 --rc geninfo_all_blocks=1 00:06:56.754 --rc geninfo_unexecuted_blocks=1 00:06:56.754 00:06:56.754 ' 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.754 --rc genhtml_branch_coverage=1 00:06:56.754 --rc genhtml_function_coverage=1 00:06:56.754 --rc genhtml_legend=1 00:06:56.754 --rc geninfo_all_blocks=1 00:06:56.754 --rc geninfo_unexecuted_blocks=1 00:06:56.754 00:06:56.754 ' 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.754 --rc genhtml_branch_coverage=1 00:06:56.754 --rc genhtml_function_coverage=1 00:06:56.754 --rc genhtml_legend=1 00:06:56.754 --rc geninfo_all_blocks=1 00:06:56.754 --rc geninfo_unexecuted_blocks=1 00:06:56.754 00:06:56.754 ' 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.754 --rc genhtml_branch_coverage=1 00:06:56.754 --rc genhtml_function_coverage=1 00:06:56.754 --rc genhtml_legend=1 00:06:56.754 --rc geninfo_all_blocks=1 00:06:56.754 --rc geninfo_unexecuted_blocks=1 00:06:56.754 00:06:56.754 ' 00:06:56.754 12:10:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:56.754 12:10:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59816 00:06:56.754 12:10:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59816 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59816 ']' 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.754 12:10:27 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.754 12:10:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.754 [2024-12-05 12:10:27.467125] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:06:56.754 [2024-12-05 12:10:27.467877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59816 ] 00:06:57.011 [2024-12-05 12:10:27.625892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.011 [2024-12-05 12:10:27.744275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.579 12:10:28 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.579 12:10:28 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:57.579 12:10:28 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:57.839 { 00:06:57.839 "version": "SPDK v25.01-pre git sha1 85bc1e85a", 00:06:57.839 "fields": { 00:06:57.839 "major": 25, 00:06:57.839 "minor": 1, 00:06:57.839 "patch": 0, 00:06:57.839 "suffix": "-pre", 00:06:57.839 "commit": "85bc1e85a" 00:06:57.839 } 00:06:57.839 } 00:06:57.839 12:10:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:57.839 12:10:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:57.839 12:10:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:57.839 12:10:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:57.839 12:10:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:57.839 12:10:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:57.839 12:10:28 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.839 12:10:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.839 12:10:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:57.839 12:10:28 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.839 12:10:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:57.839 12:10:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:57.839 12:10:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.839 12:10:28 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:57.839 12:10:28 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.839 12:10:28 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.839 12:10:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.839 12:10:28 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.839 12:10:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.839 12:10:28 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.839 12:10:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.839 12:10:28 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.839 12:10:28 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:57.839 12:10:28 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.098 request: 00:06:58.098 { 00:06:58.098 "method": "env_dpdk_get_mem_stats", 00:06:58.098 "req_id": 1 00:06:58.098 } 00:06:58.098 Got JSON-RPC error response 00:06:58.098 response: 00:06:58.098 { 00:06:58.098 "code": -32601, 00:06:58.098 "message": "Method not found" 00:06:58.098 } 00:06:58.098 12:10:28 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:58.098 12:10:28 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:58.098 12:10:28 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:58.098 12:10:28 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:58.098 12:10:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59816 00:06:58.098 12:10:28 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59816 ']' 00:06:58.098 12:10:28 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59816 00:06:58.098 12:10:28 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:58.098 12:10:28 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.098 12:10:28 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59816 00:06:58.098 killing process with pid 59816 00:06:58.098 12:10:28 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.098 12:10:28 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.098 12:10:28 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59816' 00:06:58.098 12:10:28 app_cmdline -- common/autotest_common.sh@973 -- # kill 59816 00:06:58.098 12:10:28 app_cmdline -- common/autotest_common.sh@978 -- # wait 59816 00:07:00.009 ************************************ 00:07:00.009 END TEST app_cmdline 00:07:00.009 ************************************ 00:07:00.009 00:07:00.009 real 0m3.186s 00:07:00.009 user 0m3.410s 00:07:00.009 sys 0m0.474s 00:07:00.009 12:10:30 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.009 12:10:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.009 12:10:30 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:00.009 12:10:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.009 12:10:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.009 12:10:30 -- common/autotest_common.sh@10 -- # set +x 00:07:00.009 ************************************ 00:07:00.009 START TEST version 00:07:00.009 ************************************ 00:07:00.009 12:10:30 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:00.009 * Looking for test storage... 
00:07:00.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:00.009 12:10:30 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.009 12:10:30 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.009 12:10:30 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.009 12:10:30 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.009 12:10:30 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.009 12:10:30 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.009 12:10:30 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.009 12:10:30 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.009 12:10:30 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.009 12:10:30 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.009 12:10:30 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.009 12:10:30 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.009 12:10:30 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.009 12:10:30 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.009 12:10:30 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.009 12:10:30 version -- scripts/common.sh@344 -- # case "$op" in 00:07:00.009 12:10:30 version -- scripts/common.sh@345 -- # : 1 00:07:00.009 12:10:30 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.009 12:10:30 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.009 12:10:30 version -- scripts/common.sh@365 -- # decimal 1 00:07:00.009 12:10:30 version -- scripts/common.sh@353 -- # local d=1 00:07:00.009 12:10:30 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.009 12:10:30 version -- scripts/common.sh@355 -- # echo 1 00:07:00.009 12:10:30 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.009 12:10:30 version -- scripts/common.sh@366 -- # decimal 2 00:07:00.009 12:10:30 version -- scripts/common.sh@353 -- # local d=2 00:07:00.009 12:10:30 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.009 12:10:30 version -- scripts/common.sh@355 -- # echo 2 00:07:00.009 12:10:30 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.009 12:10:30 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.009 12:10:30 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.009 12:10:30 version -- scripts/common.sh@368 -- # return 0 00:07:00.009 12:10:30 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.009 12:10:30 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.009 --rc genhtml_branch_coverage=1 00:07:00.009 --rc genhtml_function_coverage=1 00:07:00.009 --rc genhtml_legend=1 00:07:00.009 --rc geninfo_all_blocks=1 00:07:00.009 --rc geninfo_unexecuted_blocks=1 00:07:00.009 00:07:00.009 ' 00:07:00.009 12:10:30 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.009 --rc genhtml_branch_coverage=1 00:07:00.009 --rc genhtml_function_coverage=1 00:07:00.009 --rc genhtml_legend=1 00:07:00.009 --rc geninfo_all_blocks=1 00:07:00.009 --rc geninfo_unexecuted_blocks=1 00:07:00.009 00:07:00.009 ' 00:07:00.009 12:10:30 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.009 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:00.009 --rc genhtml_branch_coverage=1 00:07:00.009 --rc genhtml_function_coverage=1 00:07:00.009 --rc genhtml_legend=1 00:07:00.009 --rc geninfo_all_blocks=1 00:07:00.009 --rc geninfo_unexecuted_blocks=1 00:07:00.009 00:07:00.009 ' 00:07:00.009 12:10:30 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.009 --rc genhtml_branch_coverage=1 00:07:00.009 --rc genhtml_function_coverage=1 00:07:00.009 --rc genhtml_legend=1 00:07:00.009 --rc geninfo_all_blocks=1 00:07:00.009 --rc geninfo_unexecuted_blocks=1 00:07:00.009 00:07:00.009 ' 00:07:00.009 12:10:30 version -- app/version.sh@17 -- # get_header_version major 00:07:00.009 12:10:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.009 12:10:30 version -- app/version.sh@14 -- # cut -f2 00:07:00.009 12:10:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.009 12:10:30 version -- app/version.sh@17 -- # major=25 00:07:00.009 12:10:30 version -- app/version.sh@18 -- # get_header_version minor 00:07:00.009 12:10:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.009 12:10:30 version -- app/version.sh@14 -- # cut -f2 00:07:00.009 12:10:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.009 12:10:30 version -- app/version.sh@18 -- # minor=1 00:07:00.009 12:10:30 version -- app/version.sh@19 -- # get_header_version patch 00:07:00.009 12:10:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.009 12:10:30 version -- app/version.sh@14 -- # cut -f2 00:07:00.009 12:10:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.009 12:10:30 version -- app/version.sh@19 -- # patch=0 00:07:00.009 12:10:30 version -- app/version.sh@20 -- # get_header_version suffix 00:07:00.009 12:10:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.009 12:10:30 version -- app/version.sh@14 -- # cut -f2 00:07:00.009 12:10:30 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.009 12:10:30 version -- app/version.sh@20 -- # suffix=-pre 00:07:00.009 12:10:30 version -- app/version.sh@22 -- # version=25.1 00:07:00.009 12:10:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:00.009 12:10:30 version -- app/version.sh@28 -- # version=25.1rc0 00:07:00.009 12:10:30 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:00.009 12:10:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:00.009 12:10:30 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:00.009 12:10:30 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:00.009 00:07:00.009 real 0m0.195s 00:07:00.009 user 0m0.119s 00:07:00.009 sys 0m0.106s 00:07:00.009 ************************************ 00:07:00.009 END TEST version 00:07:00.009 ************************************ 00:07:00.009 12:10:30 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.009 12:10:30 version -- common/autotest_common.sh@10 -- # set +x 00:07:00.009 12:10:30 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:00.009 12:10:30 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:00.009 12:10:30 -- spdk/autotest.sh@194 -- # uname -s 00:07:00.009 12:10:30 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:00.009 12:10:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:00.009 12:10:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:00.009 12:10:30 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:00.009 12:10:30 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:00.009 12:10:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.009 12:10:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.009 12:10:30 -- common/autotest_common.sh@10 -- # set +x 00:07:00.009 ************************************ 00:07:00.009 START TEST blockdev_nvme 00:07:00.009 ************************************ 00:07:00.009 12:10:30 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:00.009 * Looking for test storage... 00:07:00.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:00.009 12:10:30 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.009 12:10:30 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.009 12:10:30 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.009 12:10:30 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.009 12:10:30 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.009 12:10:30 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.009 12:10:30 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.010 12:10:30 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.010 12:10:30 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.010 12:10:30 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.010 12:10:30 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.010 12:10:30 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.010 12:10:30 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.010 12:10:30 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.010 12:10:30 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.010 12:10:30 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:00.010 12:10:30 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:00.010 12:10:30 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.010 12:10:30 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.010 12:10:30 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:00.010 12:10:30 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:00.010 12:10:30 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.271 12:10:30 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:00.271 12:10:30 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.271 12:10:30 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:00.271 12:10:30 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:00.271 12:10:30 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.271 12:10:30 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:00.271 12:10:30 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.271 12:10:30 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.271 12:10:30 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.271 12:10:30 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:00.271 12:10:30 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.271 12:10:30 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.271 --rc genhtml_branch_coverage=1 00:07:00.271 --rc genhtml_function_coverage=1 00:07:00.271 --rc genhtml_legend=1 00:07:00.271 --rc geninfo_all_blocks=1 00:07:00.271 --rc geninfo_unexecuted_blocks=1 00:07:00.271 00:07:00.271 ' 00:07:00.271 12:10:30 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.271 --rc genhtml_branch_coverage=1 00:07:00.271 --rc genhtml_function_coverage=1 00:07:00.271 --rc genhtml_legend=1 00:07:00.271 --rc geninfo_all_blocks=1 00:07:00.271 --rc geninfo_unexecuted_blocks=1 00:07:00.271 00:07:00.271 ' 00:07:00.271 12:10:30 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.271 --rc genhtml_branch_coverage=1 00:07:00.271 --rc genhtml_function_coverage=1 00:07:00.271 --rc genhtml_legend=1 00:07:00.271 --rc geninfo_all_blocks=1 00:07:00.271 --rc geninfo_unexecuted_blocks=1 00:07:00.271 00:07:00.271 ' 00:07:00.271 12:10:30 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.271 --rc genhtml_branch_coverage=1 00:07:00.271 --rc genhtml_function_coverage=1 00:07:00.271 --rc genhtml_legend=1 00:07:00.271 --rc geninfo_all_blocks=1 00:07:00.271 --rc geninfo_unexecuted_blocks=1 00:07:00.271 00:07:00.271 ' 00:07:00.271 12:10:30 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:00.271 12:10:30 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:00.271 12:10:30 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:00.271 12:10:30 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:00.271 12:10:30 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:00.271 12:10:30 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:00.271 12:10:30 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:00.271 12:10:30 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:00.271 12:10:30 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:00.271 12:10:30 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:00.271 12:10:30 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:00.271 12:10:30 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:00.271 12:10:30 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:07:00.271 12:10:30 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:00.272 12:10:30 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:00.272 12:10:30 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:07:00.272 12:10:30 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:00.272 12:10:30 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:07:00.272 12:10:30 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:00.272 12:10:30 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:00.272 12:10:30 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:00.272 12:10:30 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:07:00.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.272 12:10:30 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:07:00.272 12:10:30 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:00.272 12:10:30 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59994 00:07:00.272 12:10:30 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:00.272 12:10:30 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59994 00:07:00.272 12:10:30 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59994 ']' 00:07:00.272 12:10:30 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.272 12:10:30 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.272 12:10:30 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:00.272 12:10:30 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.272 12:10:30 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.272 12:10:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.272 [2024-12-05 12:10:30.974716] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
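The blockdev_nvme target started here loads a bdev config produced by scripts/gen_nvme.sh: one bdev_nvme_attach_controller entry per QEMU NVMe controller, Nvme0 through Nvme3 at PCI addresses 0000:00:10.0 through 0000:00:13.0, as visible in the load_subsystem_config call below. Once the controllers are attached, the test enumerates the unclaimed bdevs; the equivalent by hand (names as in the JSON dump that follows):

    scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
    # => Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1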
00:07:00.272 [2024-12-05 12:10:30.974844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59994 ] 00:07:00.272 [2024-12-05 12:10:31.133264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.532 [2024-12-05 12:10:31.245395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.097 12:10:31 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.097 12:10:31 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:01.097 12:10:31 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:01.097 12:10:31 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:07:01.097 12:10:31 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:01.097 12:10:31 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:01.097 12:10:31 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:01.097 12:10:31 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:01.097 12:10:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.097 12:10:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.353 12:10:32 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.353 12:10:32 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:01.353 12:10:32 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.353 12:10:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.611 12:10:32 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.611 12:10:32 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:07:01.611 12:10:32 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:01.611 12:10:32 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.611 12:10:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.611 12:10:32 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.611 12:10:32 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:01.611 12:10:32 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.611 12:10:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.611 12:10:32 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.611 12:10:32 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:01.611 12:10:32 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.611 12:10:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.611 12:10:32 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.611 12:10:32 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:01.611 12:10:32 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:01.611 12:10:32 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.611 12:10:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.611 12:10:32 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:01.611 12:10:32 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.611 12:10:32 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:01.611 12:10:32 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:01.612 12:10:32 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "73997d15-91b8-47fe-bf21-289bdbd76750"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "73997d15-91b8-47fe-bf21-289bdbd76750",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "e3ec87c8-f17b-4e02-a324-9043f32a2280"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e3ec87c8-f17b-4e02-a324-9043f32a2280",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a908d6ef-889d-4a2a-9e78-09be93d96668"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a908d6ef-889d-4a2a-9e78-09be93d96668",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ad0e4b51-96c3-4b0c-9b6f-bb83fc6dce69"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ad0e4b51-96c3-4b0c-9b6f-bb83fc6dce69",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "88e2a80d-6a36-47c0-9a57-b5c77d7ce0c7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "88e2a80d-6a36-47c0-9a57-b5c77d7ce0c7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9d35c4d0-bf17-439c-84c0-1555fb91e58e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9d35c4d0-bf17-439c-84c0-1555fb91e58e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:01.612 12:10:32 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:01.612 12:10:32 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:01.612 12:10:32 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:01.612 12:10:32 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59994 00:07:01.612 12:10:32 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59994 ']' 00:07:01.612 12:10:32 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59994 00:07:01.612 12:10:32 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:01.612 12:10:32 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.612 12:10:32 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59994 00:07:01.612 killing process with pid 59994 00:07:01.612 12:10:32 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.612 12:10:32 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.612 12:10:32 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59994' 00:07:01.612 12:10:32 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59994 00:07:01.612 12:10:32 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59994 00:07:03.516 12:10:33 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:03.516 12:10:33 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:03.516 12:10:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:03.516 12:10:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.516 12:10:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.516 ************************************ 00:07:03.516 START TEST bdev_hello_world 00:07:03.516 ************************************ 00:07:03.516 12:10:34 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:03.516 [2024-12-05 12:10:34.064812] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:07:03.516 [2024-12-05 12:10:34.064949] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60078 ] 00:07:03.516 [2024-12-05 12:10:34.228502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.516 [2024-12-05 12:10:34.329443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.083 [2024-12-05 12:10:34.879450] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:04.083 [2024-12-05 12:10:34.879521] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:04.083 [2024-12-05 12:10:34.879543] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:04.083 [2024-12-05 12:10:34.882116] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:04.083 [2024-12-05 12:10:34.882666] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:04.083 [2024-12-05 12:10:34.882690] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:04.083 [2024-12-05 12:10:34.882860] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
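Note: the bdev_hello_world test above is a single run of the hello_bdev example binary against the JSON bdev config generated earlier by gen_nvme.sh. A minimal by-hand reproduction, assuming the same vagrant repo layout this job uses, would be roughly:
    # Sketch only -- re-run the hello_bdev example manually; it opens bdev
    # Nvme0n1, writes "Hello World!", reads it back, then stops the app.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1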
00:07:04.083 00:07:04.083 [2024-12-05 12:10:34.882879] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:05.020 ************************************ 00:07:05.020 END TEST bdev_hello_world 00:07:05.020 ************************************ 00:07:05.020 00:07:05.020 real 0m1.644s 00:07:05.020 user 0m1.371s 00:07:05.020 sys 0m0.166s 00:07:05.020 12:10:35 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.020 12:10:35 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:05.020 12:10:35 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:05.020 12:10:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.020 12:10:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.020 12:10:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.020 ************************************ 00:07:05.020 START TEST bdev_bounds 00:07:05.020 ************************************ 00:07:05.020 12:10:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:05.020 12:10:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60114 00:07:05.020 12:10:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.020 12:10:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60114' 00:07:05.020 Process bdevio pid: 60114 00:07:05.020 12:10:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:05.020 12:10:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60114 00:07:05.020 12:10:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 60114 ']' 00:07:05.020 12:10:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.020 12:10:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.020 12:10:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.020 12:10:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.020 12:10:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:05.020 [2024-12-05 12:10:35.747128] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:07:05.020 [2024-12-05 12:10:35.747240] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60114 ] 00:07:05.281 [2024-12-05 12:10:35.902710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.281 [2024-12-05 12:10:36.024924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.281 [2024-12-05 12:10:36.025166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.281 [2024-12-05 12:10:36.025284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.853 12:10:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.853 12:10:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:05.853 12:10:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:05.853 I/O targets: 00:07:05.853 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:05.853 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:05.853 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:05.853 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:05.853 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:05.853 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:05.853 00:07:05.853 00:07:05.853 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.853 http://cunit.sourceforge.net/ 00:07:05.853 00:07:05.853 00:07:05.853 Suite: bdevio tests on: Nvme3n1 00:07:06.116 Test: blockdev write read block ...passed 00:07:06.116 Test: blockdev write zeroes read block ...passed 00:07:06.116 Test: blockdev write zeroes read no split ...passed 00:07:06.116 Test: blockdev write zeroes read split ...passed 00:07:06.116 Test: blockdev write zeroes read split partial ...passed 00:07:06.116 Test: blockdev reset ...[2024-12-05 12:10:36.769389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:06.116 [2024-12-05 12:10:36.774072] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
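Note: bdev_bounds drives the bdevio app with -w (start, then wait to be told to run) against the same bdev.json, and kicks off all suites through the tests.py helper. A rough hand-run equivalent under this job's layout:
    # Sketch only -- start bdevio waiting for the perform_tests trigger,
    # then run every bdevio suite against the attached NVMe bdevs.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests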
00:07:06.116 passed 00:07:06.116 Test: blockdev write read 8 blocks ...passed 00:07:06.116 Test: blockdev write read size > 128k ...passed 00:07:06.116 Test: blockdev write read invalid size ...passed 00:07:06.116 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.116 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.116 Test: blockdev write read max offset ...passed 00:07:06.116 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.116 Test: blockdev writev readv 8 blocks ...passed 00:07:06.116 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.116 Test: blockdev writev readv block ...passed 00:07:06.116 Test: blockdev writev readv size > 128k ...passed 00:07:06.116 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.116 Test: blockdev comparev and writev ...[2024-12-05 12:10:36.793819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba40a000 len:0x1000 00:07:06.116 [2024-12-05 12:10:36.793877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.116 passed 00:07:06.116 Test: blockdev nvme passthru rw ...passed 00:07:06.116 Test: blockdev nvme passthru vendor specific ...passed 00:07:06.116 Test: blockdev nvme admin passthru ...[2024-12-05 12:10:36.796486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.116 [2024-12-05 12:10:36.796524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.116 passed 00:07:06.116 Test: blockdev copy ...passed 00:07:06.116 Suite: bdevio tests on: Nvme2n3 00:07:06.116 Test: blockdev write read block ...passed 00:07:06.116 Test: blockdev write zeroes read block ...passed 00:07:06.116 Test: blockdev write zeroes read no split ...passed 00:07:06.116 Test: blockdev write zeroes read split ...passed 00:07:06.116 Test: blockdev write zeroes read split partial ...passed 00:07:06.116 Test: blockdev reset ...[2024-12-05 12:10:36.852663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:06.116 [2024-12-05 12:10:36.856114] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:07:06.116 passed 00:07:06.116 Test: blockdev write read 8 blocks ...
00:07:06.116 passed 00:07:06.116 Test: blockdev write read size > 128k ...passed 00:07:06.116 Test: blockdev write read invalid size ...passed 00:07:06.116 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.116 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.116 Test: blockdev write read max offset ...passed 00:07:06.116 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.116 Test: blockdev writev readv 8 blocks ...passed 00:07:06.116 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.116 Test: blockdev writev readv block ...passed 00:07:06.116 Test: blockdev writev readv size > 128k ...passed 00:07:06.116 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.116 Test: blockdev comparev and writev ...[2024-12-05 12:10:36.874955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295c06000 len:0x1000 00:07:06.116 [2024-12-05 12:10:36.875001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.116 passed 00:07:06.116 Test: blockdev nvme passthru rw ...passed 00:07:06.116 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:10:36.878147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.116 [2024-12-05 12:10:36.878181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.116 passed 00:07:06.116 Test: blockdev nvme admin passthru ...passed 00:07:06.116 Test: blockdev copy ...passed 00:07:06.117 Suite: bdevio tests on: Nvme2n2 00:07:06.117 Test: blockdev write read block ...passed 00:07:06.117 Test: blockdev write zeroes read block ...passed 00:07:06.117 Test: blockdev write zeroes read no split ...passed 00:07:06.117 Test: blockdev write zeroes read split ...passed 00:07:06.117 Test: blockdev write zeroes read split partial ...passed 00:07:06.117 Test: blockdev reset ...[2024-12-05 12:10:36.935959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:06.117 [2024-12-05 12:10:36.939865] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
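Note: the bdev names these suites run against were collected earlier by blockdev.sh from the bdev_get_bdevs RPC; that same jq pipeline can be replayed by hand against a running target, e.g.:
    # Sketch only -- list unclaimed bdevs the way blockdev.sh builds bdevs_name
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false)' \
        | jq -r .name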
00:07:06.117 passed 00:07:06.117 Test: blockdev write read 8 blocks ...passed 00:07:06.117 Test: blockdev write read size > 128k ...passed 00:07:06.117 Test: blockdev write read invalid size ...passed 00:07:06.117 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.117 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.117 Test: blockdev write read max offset ...passed 00:07:06.117 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.117 Test: blockdev writev readv 8 blocks ...passed 00:07:06.117 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.117 Test: blockdev writev readv block ...passed 00:07:06.117 Test: blockdev writev readv size > 128k ...passed 00:07:06.117 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.117 Test: blockdev comparev and writev ...[2024-12-05 12:10:36.959623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c363c000 len:0x1000 00:07:06.117 [2024-12-05 12:10:36.959670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.117 passed 00:07:06.117 Test: blockdev nvme passthru rw ...passed 00:07:06.117 Test: blockdev nvme passthru vendor specific ...passed 00:07:06.117 Test: blockdev nvme admin passthru ...[2024-12-05 12:10:36.962048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.117 [2024-12-05 12:10:36.962078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.117 passed 00:07:06.117 Test: blockdev copy ...passed 00:07:06.117 Suite: bdevio tests on: Nvme2n1 00:07:06.117 Test: blockdev write read block ...passed 00:07:06.117 Test: blockdev write zeroes read block ...passed 00:07:06.117 Test: blockdev write zeroes read no split ...passed 00:07:06.379 Test: blockdev write zeroes read split ...passed 00:07:06.380 Test: blockdev write zeroes read split partial ...passed 00:07:06.380 Test: blockdev reset ...[2024-12-05 12:10:37.018859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:06.380 [2024-12-05 12:10:37.022057] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:06.380 passed 00:07:06.380 Test: blockdev write read 8 blocks ...passed 00:07:06.380 Test: blockdev write read size > 128k ...passed 00:07:06.380 Test: blockdev write read invalid size ...passed 00:07:06.380 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.380 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.380 Test: blockdev write read max offset ...passed 00:07:06.380 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.380 Test: blockdev writev readv 8 blocks ...passed 00:07:06.380 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.380 Test: blockdev writev readv block ...passed 00:07:06.380 Test: blockdev writev readv size > 128k ...passed 00:07:06.380 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.380 Test: blockdev comparev and writev ...[2024-12-05 12:10:37.038374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3638000 len:0x1000 00:07:06.380 [2024-12-05 12:10:37.038421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.380 passed 00:07:06.380 Test: blockdev nvme passthru rw ...passed 00:07:06.380 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:10:37.041176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.380 [2024-12-05 12:10:37.041206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.380 passed 00:07:06.380 Test: blockdev nvme admin passthru ...passed 00:07:06.380 Test: blockdev copy ...passed 00:07:06.380 Suite: bdevio tests on: Nvme1n1 00:07:06.380 Test: blockdev write read block ...passed 00:07:06.380 Test: blockdev write zeroes read block ...passed 00:07:06.380 Test: blockdev write zeroes read no split ...passed 00:07:06.380 Test: blockdev write zeroes read split ...passed 00:07:06.380 Test: blockdev write zeroes read split partial ...passed 00:07:06.380 Test: blockdev reset ...[2024-12-05 12:10:37.102882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:06.380 [2024-12-05 12:10:37.105638] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:07:06.380 passed 00:07:06.380 Test: blockdev write read 8 blocks ...
00:07:06.380 passed 00:07:06.380 Test: blockdev write read size > 128k ...passed 00:07:06.380 Test: blockdev write read invalid size ...passed 00:07:06.380 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.380 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.380 Test: blockdev write read max offset ...passed 00:07:06.380 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.380 Test: blockdev writev readv 8 blocks ...passed 00:07:06.380 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.380 Test: blockdev writev readv block ...passed 00:07:06.380 Test: blockdev writev readv size > 128k ...passed 00:07:06.380 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.380 Test: blockdev comparev and writev ...[2024-12-05 12:10:37.122614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3634000 len:0x1000 00:07:06.380 [2024-12-05 12:10:37.122834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:06.380 passed 00:07:06.380 Test: blockdev nvme passthru rw ...passed 00:07:06.380 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:10:37.125870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:06.380 [2024-12-05 12:10:37.126009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:06.380 passed 00:07:06.380 Test: blockdev nvme admin passthru ...passed 00:07:06.380 Test: blockdev copy ...passed 00:07:06.380 Suite: bdevio tests on: Nvme0n1 00:07:06.380 Test: blockdev write read block ...passed 00:07:06.380 Test: blockdev write zeroes read block ...passed 00:07:06.380 Test: blockdev write zeroes read no split ...passed 00:07:06.380 Test: blockdev write zeroes read split ...passed 00:07:06.380 Test: blockdev write zeroes read split partial ...passed 00:07:06.380 Test: blockdev reset ...[2024-12-05 12:10:37.188201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:06.380 [2024-12-05 12:10:37.192152] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:06.380 passed 00:07:06.380 Test: blockdev write read 8 blocks ...passed 00:07:06.380 Test: blockdev write read size > 128k ...passed 00:07:06.380 Test: blockdev write read invalid size ...passed 00:07:06.380 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:06.380 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:06.380 Test: blockdev write read max offset ...passed 00:07:06.380 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:06.380 Test: blockdev writev readv 8 blocks ...passed 00:07:06.380 Test: blockdev writev readv 30 x 1block ...passed 00:07:06.380 Test: blockdev writev readv block ...passed 00:07:06.380 Test: blockdev writev readv size > 128k ...passed 00:07:06.380 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:06.380 Test: blockdev comparev and writev ...passed 00:07:06.380 Test: blockdev nvme passthru rw ...[2024-12-05 12:10:37.211929] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:06.380 separate metadata which is not supported yet. 
00:07:06.380 passed 00:07:06.380 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:10:37.213862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:06.380 [2024-12-05 12:10:37.213915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:06.380 passed 00:07:06.380 Test: blockdev nvme admin passthru ...passed 00:07:06.380 Test: blockdev copy ...passed 00:07:06.380 00:07:06.380 Run Summary: Type Total Ran Passed Failed Inactive 00:07:06.380 suites 6 6 n/a 0 0 00:07:06.380 tests 138 138 138 0 0 00:07:06.380 asserts 893 893 893 0 n/a 00:07:06.380 00:07:06.380 Elapsed time = 1.262 seconds 00:07:06.380 0 00:07:06.380 12:10:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60114 00:07:06.380 12:10:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 60114 ']' 00:07:06.380 12:10:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 60114 00:07:06.380 12:10:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:06.641 12:10:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.641 12:10:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60114 00:07:06.641 12:10:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.641 12:10:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.641 12:10:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60114' 00:07:06.641 killing process with pid 60114 00:07:06.641 12:10:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 60114 00:07:06.641 12:10:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 60114 00:07:07.214 12:10:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:07.214 00:07:07.214 real 0m2.303s 00:07:07.214 user 0m5.764s 00:07:07.214 sys 0m0.324s 00:07:07.214 12:10:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.214 12:10:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:07.214 ************************************ 00:07:07.214 END TEST bdev_bounds 00:07:07.214 ************************************ 00:07:07.214 12:10:38 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:07.214 12:10:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:07.214 12:10:38 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.214 12:10:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.214 ************************************ 00:07:07.214 START TEST bdev_nbd 00:07:07.214 ************************************ 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60174 00:07:07.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60174 /var/tmp/spdk-nbd.sock 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60174 ']' 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.214 12:10:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:07.475 [2024-12-05 12:10:38.122486] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:07:07.475 [2024-12-05 12:10:38.122818] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.475 [2024-12-05 12:10:38.291268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.737 [2024-12-05 12:10:38.408323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.308 12:10:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.308 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:08.308 12:10:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:08.308 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.308 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.308 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:08.308 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:08.308 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.308 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.308 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:08.308 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:08.308 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:08.308 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:08.308 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:08.308 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.569 1+0 records in 
00:07:08.569 1+0 records out 00:07:08.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732375 s, 5.6 MB/s 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:08.569 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:08.829 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:08.829 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:08.829 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:08.829 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.830 1+0 records in 00:07:08.830 1+0 records out 00:07:08.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690578 s, 5.9 MB/s 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:08.830 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.091 1+0 records in 00:07:09.091 1+0 records out 00:07:09.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000820584 s, 5.0 MB/s 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.091 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.353 1+0 records in 00:07:09.353 1+0 records out 00:07:09.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127601 s, 3.2 MB/s 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.353 12:10:39 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.353 12:10:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:09.353 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:09.353 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.613 1+0 records in 00:07:09.613 1+0 records out 00:07:09.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731146 s, 5.6 MB/s 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.613 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.613 1+0 records in 00:07:09.613 1+0 records out 00:07:09.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734748 s, 5.6 MB/s 00:07:09.614 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.614 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.614 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.614 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.614 12:10:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.614 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.614 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.614 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.874 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:09.874 { 00:07:09.874 "nbd_device": "/dev/nbd0", 00:07:09.874 "bdev_name": "Nvme0n1" 00:07:09.874 }, 00:07:09.874 { 00:07:09.874 "nbd_device": "/dev/nbd1", 00:07:09.874 "bdev_name": "Nvme1n1" 00:07:09.874 }, 00:07:09.874 { 00:07:09.874 "nbd_device": "/dev/nbd2", 00:07:09.874 "bdev_name": "Nvme2n1" 00:07:09.874 }, 00:07:09.874 { 00:07:09.874 "nbd_device": "/dev/nbd3", 00:07:09.874 "bdev_name": "Nvme2n2" 00:07:09.874 }, 00:07:09.874 { 00:07:09.874 "nbd_device": "/dev/nbd4", 00:07:09.874 "bdev_name": "Nvme2n3" 00:07:09.874 }, 00:07:09.874 { 00:07:09.874 "nbd_device": "/dev/nbd5", 00:07:09.874 "bdev_name": "Nvme3n1" 00:07:09.874 } 00:07:09.874 ]' 00:07:09.874 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:09.874 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:09.874 { 00:07:09.874 "nbd_device": "/dev/nbd0", 00:07:09.874 "bdev_name": "Nvme0n1" 00:07:09.874 }, 00:07:09.874 { 00:07:09.874 "nbd_device": "/dev/nbd1", 00:07:09.874 "bdev_name": "Nvme1n1" 00:07:09.874 }, 00:07:09.874 { 00:07:09.874 "nbd_device": "/dev/nbd2", 00:07:09.874 "bdev_name": "Nvme2n1" 00:07:09.874 }, 00:07:09.874 { 00:07:09.874 "nbd_device": "/dev/nbd3", 00:07:09.874 "bdev_name": "Nvme2n2" 00:07:09.874 }, 00:07:09.874 { 00:07:09.874 "nbd_device": "/dev/nbd4", 00:07:09.874 "bdev_name": "Nvme2n3" 00:07:09.874 }, 00:07:09.874 { 00:07:09.874 "nbd_device": "/dev/nbd5", 00:07:09.874 "bdev_name": "Nvme3n1" 00:07:09.874 } 00:07:09.874 ]' 00:07:09.874 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:09.874 12:10:40 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:09.874 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.874 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:09.874 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.874 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:09.874 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.874 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.135 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.135 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.135 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.135 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.135 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.135 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.135 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.135 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.135 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.135 12:10:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.396 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.396 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:10.396 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.396 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.396 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.397 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.397 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.397 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.397 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.397 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:10.657 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:10.657 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:10.657 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:10.657 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.657 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.657 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:10.657 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.657 12:10:41 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:10.657 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.657 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:10.938 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.939 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:11.200 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:11.200 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:11.200 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:11.200 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.200 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.200 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:11.200 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.200 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.201 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.201 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.201 12:10:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.460 12:10:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:11.460 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:11.721 /dev/nbd0 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.721 
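nbd_get_count, traced at nbd_common.sh@61-66 just above, derives the number of exported devices by listing them over RPC and counting /dev/nbd entries. Rebuilt as a standalone pipeline against the same socket; the || true guard mirrors the bare true at nbd_common.sh@65 and keeps grep's non-zero exit (no matches) from aborting a set -e script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks |
            jq -r '.[] | .nbd_device' |
            grep -c /dev/nbd || true)
    echo "$count"   # 0 at this point, since every disk was just stopped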
12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.721 1+0 records in 00:07:11.721 1+0 records out 00:07:11.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000884578 s, 4.6 MB/s 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:11.721 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:11.983 /dev/nbd1 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.983 1+0 records in 00:07:11.983 1+0 records out 00:07:11.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000709378 s, 5.8 MB/s 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:11.983 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:12.244 /dev/nbd10 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.244 1+0 records in 00:07:12.244 1+0 records out 00:07:12.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805232 s, 5.1 MB/s 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.244 12:10:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:12.504 /dev/nbd11 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.504 12:10:43 
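waitfornbd (autotest_common.sh@872-893 in the trace) goes further than the exit helper: once the name appears in /proc/partitions it reads a single 4 KiB block with O_DIRECT and checks that the scratch file came out non-empty, so a device that registered but cannot serve I/O still fails the wait. A sketch under the same paths; the poll interval and the retry-on-failed-dd behavior are inferred, since the trace only shows the happy path:

    waitfornbd() {
        local nbd_name=$1 i size
        local scratch=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed poll interval
        done
        for ((i = 1; i <= 20; i++)); do
            # Probe with a direct read; a dead device fails dd or yields 0 bytes.
            dd if=/dev/$nbd_name of="$scratch" bs=4096 count=1 iflag=direct || continue
            size=$(stat -c %s "$scratch")
            rm -f "$scratch"
            [ "$size" != 0 ] && return 0
        done
        return 1
    }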
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.504 1+0 records in 00:07:12.504 1+0 records out 00:07:12.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124669 s, 3.3 MB/s 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.504 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:12.766 /dev/nbd12 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.766 1+0 records in 00:07:12.766 1+0 records out 00:07:12.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536162 s, 7.6 MB/s 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.766 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:13.027 /dev/nbd13 
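nbd_start_disks, whose body runs at nbd_common.sh@9-17 above, walks the bdev and device arrays in lockstep: bdev i is attached to nbd device i, and each attach is followed by the readiness probe. Condensed to its core, with the arrays taken straight from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev_list=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        "$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done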
00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.027 1+0 records in 00:07:13.027 1+0 records out 00:07:13.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000937902 s, 4.4 MB/s 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.027 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:13.288 { 00:07:13.288 "nbd_device": "/dev/nbd0", 00:07:13.288 "bdev_name": "Nvme0n1" 00:07:13.288 }, 00:07:13.288 { 00:07:13.288 "nbd_device": "/dev/nbd1", 00:07:13.288 "bdev_name": "Nvme1n1" 00:07:13.288 }, 00:07:13.288 { 00:07:13.288 "nbd_device": "/dev/nbd10", 00:07:13.288 "bdev_name": "Nvme2n1" 00:07:13.288 }, 00:07:13.288 { 00:07:13.288 "nbd_device": "/dev/nbd11", 00:07:13.288 "bdev_name": "Nvme2n2" 00:07:13.288 }, 00:07:13.288 { 00:07:13.288 "nbd_device": "/dev/nbd12", 00:07:13.288 "bdev_name": "Nvme2n3" 00:07:13.288 }, 00:07:13.288 { 00:07:13.288 "nbd_device": "/dev/nbd13", 00:07:13.288 "bdev_name": "Nvme3n1" 00:07:13.288 } 00:07:13.288 ]' 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.288 { 00:07:13.288 "nbd_device": "/dev/nbd0", 00:07:13.288 "bdev_name": "Nvme0n1" 00:07:13.288 }, 00:07:13.288 { 00:07:13.288 "nbd_device": "/dev/nbd1", 00:07:13.288 "bdev_name": "Nvme1n1" 00:07:13.288 }, 00:07:13.288 { 00:07:13.288 "nbd_device": "/dev/nbd10", 00:07:13.288 "bdev_name": "Nvme2n1" 
00:07:13.288 }, 00:07:13.288 { 00:07:13.288 "nbd_device": "/dev/nbd11", 00:07:13.288 "bdev_name": "Nvme2n2" 00:07:13.288 }, 00:07:13.288 { 00:07:13.288 "nbd_device": "/dev/nbd12", 00:07:13.288 "bdev_name": "Nvme2n3" 00:07:13.288 }, 00:07:13.288 { 00:07:13.288 "nbd_device": "/dev/nbd13", 00:07:13.288 "bdev_name": "Nvme3n1" 00:07:13.288 } 00:07:13.288 ]' 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:13.288 /dev/nbd1 00:07:13.288 /dev/nbd10 00:07:13.288 /dev/nbd11 00:07:13.288 /dev/nbd12 00:07:13.288 /dev/nbd13' 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:13.288 /dev/nbd1 00:07:13.288 /dev/nbd10 00:07:13.288 /dev/nbd11 00:07:13.288 /dev/nbd12 00:07:13.288 /dev/nbd13' 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:13.288 256+0 records in 00:07:13.288 256+0 records out 00:07:13.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00666752 s, 157 MB/s 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.288 12:10:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:13.288 256+0 records in 00:07:13.288 256+0 records out 00:07:13.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154892 s, 6.8 MB/s 00:07:13.288 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.288 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:13.548 256+0 records in 00:07:13.548 256+0 records out 00:07:13.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152996 s, 6.9 MB/s 00:07:13.548 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.548 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:13.809 256+0 records in 00:07:13.809 256+0 records out 00:07:13.809 
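The write half of nbd_dd_data_verify seeds a 1 MiB random file once (4096-byte blocks, count 256, matching the 1048576-byte totals in the dd output) and then streams the same bytes onto every exported device with O_DIRECT, so the page cache cannot mask a broken backend. The essence:

    seed=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$seed" bs=4096 count=256        # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$seed" of="$dev" bs=4096 count=256 oflag=direct
    done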
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158359 s, 6.6 MB/s 00:07:13.809 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.809 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:13.809 256+0 records in 00:07:13.809 256+0 records out 00:07:13.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146485 s, 7.2 MB/s 00:07:13.809 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.809 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:14.069 256+0 records in 00:07:14.069 256+0 records out 00:07:14.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153748 s, 6.8 MB/s 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:14.069 256+0 records in 00:07:14.069 256+0 records out 00:07:14.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156894 s, 6.7 MB/s 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.069 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:14.330 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.330 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:14.330 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.330 12:10:44 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:14.330 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:14.330 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:14.330 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.330 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:14.330 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.330 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:14.330 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.330 12:10:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.330 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.330 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.330 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.330 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.330 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.330 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.330 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.330 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.330 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.330 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.589 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.589 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.589 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.589 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.589 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.589 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.589 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.589 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.589 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.589 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:14.850 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:14.850 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:14.850 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:14.850 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.850 12:10:45 
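The verify half, traced at nbd_common.sh@80-85, re-reads each device and byte-compares the first 1 MiB against the seed file; any bdev that dropped or corrupted the write makes cmp exit non-zero and fails the test, after which the seed is removed:

    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$seed" "$dev"   # -b prints differing bytes on mismatch
    done
    rm "$seed"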
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.850 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:14.850 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.850 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.850 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.850 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:15.108 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:15.108 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:15.108 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:15.108 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.108 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.108 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:15.108 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.108 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.108 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.108 12:10:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:15.367 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:15.367 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:15.367 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:15.367 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.367 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.367 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:15.367 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.367 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.367 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.367 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:15.627 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:15.627 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:15.627 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:15.627 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.627 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.627 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:15.627 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.627 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.627 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.627 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.627 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.627 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.627 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.627 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.627 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:15.887 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:15.887 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.887 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:15.887 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:15.887 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:15.887 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:15.887 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:15.887 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:15.887 12:10:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:15.887 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.887 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:15.887 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:15.887 malloc_lvol_verify 00:07:15.887 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:16.147 738c4e93-a4ed-47ef-81a6-253ad08311ac 00:07:16.147 12:10:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:16.407 18fbf25a-4164-4f24-ac43-c234d4707e00 00:07:16.407 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:16.669 /dev/nbd0 00:07:16.669 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:16.669 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:16.669 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:16.669 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:16.669 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:16.669 mke2fs 1.47.0 (5-Feb-2023) 00:07:16.669 Discarding device blocks: 0/4096 done 00:07:16.669 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:16.669 00:07:16.669 Allocating group tables: 0/1 done 00:07:16.669 Writing inode tables: 0/1 done 00:07:16.669 Creating journal (1024 blocks): done 00:07:16.669 Writing superblocks and filesystem accounting information: 0/1 done 00:07:16.669 00:07:16.669 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:16.669 12:10:47 
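nbd_with_lvol_verify layers three RPCs before exporting anything: a 16 MB malloc bdev with 512-byte blocks, a logical-volume store on top of it, and a 4 MB lvol, which is then attached to /dev/nbd0, checked for non-zero capacity via sysfs, and formatted; the mkfs.ext4 run doubles as an end-to-end write test. The sequence as issued in the trace, with comments added:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MB volume
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    (( $(cat /sys/block/nbd0/size) != 0 ))   # 8192 sectors here, i.e. 4 MiB
    mkfs.ext4 /dev/nbd0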
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.669 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:16.669 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.669 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:16.669 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.669 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60174 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60174 ']' 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60174 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60174 00:07:16.930 killing process with pid 60174 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60174' 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60174 00:07:16.930 12:10:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60174 00:07:21.132 12:10:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:21.132 00:07:21.132 real 0m13.215s 00:07:21.132 user 0m16.594s 00:07:21.132 sys 0m3.993s 00:07:21.132 12:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.132 ************************************ 00:07:21.132 END TEST bdev_nbd 00:07:21.132 ************************************ 00:07:21.132 12:10:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:21.132 12:10:51 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:21.132 12:10:51 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:07:21.132 skipping fio tests on NVMe due to multi-ns failures. 00:07:21.132 12:10:51 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
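killprocess, traced at autotest_common.sh@954-978, refuses to kill blindly: it confirms the pid is set and alive with kill -0, inspects the command name on Linux so it never signals a sudo wrapper directly, and waits for the target so the next test starts from a clean slate. A simplified sketch; the trace shows only the non-sudo path, and the sudo branch is reduced here to a bail-out:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0                       # already gone
        [ "$(uname)" = Linux ] &&
            process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1           # simplification, see above
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }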
00:07:21.132 12:10:51 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:21.132 12:10:51 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:21.132 12:10:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:21.132 12:10:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.132 12:10:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:21.132 ************************************ 00:07:21.132 START TEST bdev_verify 00:07:21.132 ************************************ 00:07:21.132 12:10:51 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:21.132 [2024-12-05 12:10:51.389638] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:07:21.132 [2024-12-05 12:10:51.389770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60570 ] 00:07:21.132 [2024-12-05 12:10:51.550333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.132 [2024-12-05 12:10:51.669232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.132 [2024-12-05 12:10:51.669330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.705 Running I/O for 5 seconds... 00:07:24.037 21439.00 IOPS, 83.75 MiB/s [2024-12-05T12:10:55.857Z] 21310.00 IOPS, 83.24 MiB/s [2024-12-05T12:10:56.795Z] 21121.33 IOPS, 82.51 MiB/s [2024-12-05T12:10:57.737Z] 21104.25 IOPS, 82.44 MiB/s [2024-12-05T12:10:57.737Z] 21157.40 IOPS, 82.65 MiB/s 00:07:26.868 Latency(us) 00:07:26.868 [2024-12-05T12:10:57.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.868 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.868 Verification LBA range: start 0x0 length 0xbd0bd 00:07:26.868 Nvme0n1 : 5.05 1725.00 6.74 0.00 0.00 73875.94 14317.10 85499.27 00:07:26.868 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.868 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:26.868 Nvme0n1 : 5.06 1746.97 6.82 0.00 0.00 73013.59 18753.38 87112.47 00:07:26.868 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.868 Verification LBA range: start 0x0 length 0xa0000 00:07:26.868 Nvme1n1 : 5.08 1726.39 6.74 0.00 0.00 73531.38 11947.72 77433.30 00:07:26.868 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.868 Verification LBA range: start 0xa0000 length 0xa0000 00:07:26.868 Nvme1n1 : 5.06 1746.48 6.82 0.00 0.00 72879.28 17543.48 81869.59 00:07:26.868 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.868 Verification LBA range: start 0x0 length 0x80000 00:07:26.868 Nvme2n1 : 5.09 1734.58 6.78 0.00 0.00 73245.65 10435.35 75416.81 00:07:26.868 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.868 Verification LBA range: start 0x80000 length 0x80000 00:07:26.868 Nvme2n1 : 5.08 1752.56 6.85 0.00 0.00 72449.77 11090.71 77433.30 00:07:26.868 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.868 Verification LBA range: start 0x0 length 0x80000 00:07:26.868 Nvme2n2 : 5.09 1733.54 6.77 0.00 0.00 73127.89 12401.43 72593.72 00:07:26.868 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.868 Verification LBA range: start 0x80000 length 0x80000 00:07:26.868 Nvme2n2 : 5.09 1748.80 6.83 0.00 0.00 72528.02 11241.94 83079.48 00:07:26.868 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.868 Verification LBA range: start 0x0 length 0x80000 00:07:26.868 Nvme2n3 : 5.10 1733.08 6.77 0.00 0.00 72992.27 12552.66 69367.34 00:07:26.868 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.868 Verification LBA range: start 0x80000 length 0x80000 00:07:26.868 Nvme2n3 : 5.09 1747.76 6.83 0.00 0.00 72399.84 10586.58 83482.78 00:07:26.868 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.868 Verification LBA range: start 0x0 length 0x20000 00:07:26.868 Nvme3n1 : 5.10 1732.62 6.77 0.00 0.00 72870.44 12754.31 65737.65 00:07:26.868 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.868 Verification LBA range: start 0x20000 length 0x20000 00:07:26.868 Nvme3n1 : 5.09 1747.93 6.83 0.00 0.00 72247.92 10384.94 85499.27 00:07:26.868 [2024-12-05T12:10:57.737Z] =================================================================================================================== 00:07:26.868 [2024-12-05T12:10:57.737Z] Total : 20875.70 81.55 0.00 0.00 72927.42 10384.94 87112.47 00:07:32.155 00:07:32.155 real 0m11.558s 00:07:32.155 user 0m22.038s 00:07:32.155 sys 0m0.320s 00:07:32.156 12:11:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.156 12:11:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:32.156 ************************************ 00:07:32.156 END TEST bdev_verify 00:07:32.156 ************************************ 00:07:32.156 12:11:02 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:32.156 12:11:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:32.156 12:11:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.156 12:11:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:32.156 ************************************ 00:07:32.156 START TEST bdev_verify_big_io 00:07:32.156 ************************************ 00:07:32.156 12:11:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:32.156 [2024-12-05 12:11:03.015130] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
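Both verification stages drive the same bdevperf binary and JSON config; only the request size changes. bdev_verify above issued 4 KiB I/O (-o 4096) at queue depth 128 on two cores (-m 0x3) for five seconds, totalling about 20.9 k IOPS, while the big-I/O stage starting here reruns it at 64 KiB (-o 65536), which is why its totals land roughly fifteen-fold lower in IOPS with far higher per-request latency. The two invocations, as recorded (including the empty trailing argument run_test forwards):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    json=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$bdevperf" --json "$json" -q 128 -o 4096  -w verify -t 5 -C -m 0x3 ''   # bdev_verify
    "$bdevperf" --json "$json" -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''   # bdev_verify_big_io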
00:07:32.156 [2024-12-05 12:11:03.015264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60669 ] 00:07:32.415 [2024-12-05 12:11:03.177691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.678 [2024-12-05 12:11:03.290149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.678 [2024-12-05 12:11:03.290268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.248 Running I/O for 5 seconds... 00:07:37.166 543.00 IOPS, 33.94 MiB/s [2024-12-05T12:11:10.029Z] 1018.50 IOPS, 63.66 MiB/s [2024-12-05T12:11:10.290Z] 1665.67 IOPS, 104.10 MiB/s 00:07:39.421 Latency(us) 00:07:39.421 [2024-12-05T12:11:10.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.421 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.421 Verification LBA range: start 0x0 length 0xbd0b 00:07:39.421 Nvme0n1 : 5.83 108.49 6.78 0.00 0.00 1131653.81 18551.73 1277649.53 00:07:39.421 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.421 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:39.421 Nvme0n1 : 5.92 96.02 6.00 0.00 0.00 1249507.39 13913.80 1729343.80 00:07:39.421 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.421 Verification LBA range: start 0x0 length 0xa000 00:07:39.421 Nvme1n1 : 5.84 109.64 6.85 0.00 0.00 1081936.34 79449.80 1071160.71 00:07:39.421 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.421 Verification LBA range: start 0xa000 length 0xa000 00:07:39.421 Nvme1n1 : 5.98 99.13 6.20 0.00 0.00 1174895.13 36901.81 1768060.46 00:07:39.421 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.421 Verification LBA range: start 0x0 length 0x8000 00:07:39.421 Nvme2n1 : 5.92 112.61 7.04 0.00 0.00 1018309.57 82676.18 1135688.47 00:07:39.421 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.421 Verification LBA range: start 0x8000 length 0x8000 00:07:39.421 Nvme2n1 : 5.98 103.44 6.47 0.00 0.00 1095249.82 57268.38 1793871.56 00:07:39.421 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.421 Verification LBA range: start 0x0 length 0x8000 00:07:39.421 Nvme2n2 : 6.00 117.29 7.33 0.00 0.00 944011.42 75820.11 980821.86 00:07:39.421 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.421 Verification LBA range: start 0x8000 length 0x8000 00:07:39.421 Nvme2n2 : 6.09 108.45 6.78 0.00 0.00 1005876.78 57671.68 1832588.21 00:07:39.421 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.421 Verification LBA range: start 0x0 length 0x8000 00:07:39.421 Nvme2n3 : 6.14 124.22 7.76 0.00 0.00 857207.18 39926.55 1232480.10 00:07:39.421 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:39.421 Verification LBA range: start 0x8000 length 0x8000 00:07:39.421 Nvme2n3 : 6.14 115.84 7.24 0.00 0.00 908642.49 48597.46 1858399.31 00:07:39.421 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:39.421 Verification LBA range: start 0x0 length 0x2000 00:07:39.421 Nvme3n1 : 6.15 141.11 8.82 0.00 0.00 734251.94 1304.42 1245385.65 00:07:39.421 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:07:39.421 Verification LBA range: start 0x2000 length 0x2000 00:07:39.421 Nvme3n1 : 6.19 144.03 9.00 0.00 0.00 708700.35 1260.31 1703532.70 00:07:39.421 [2024-12-05T12:11:10.290Z] =================================================================================================================== 00:07:39.421 [2024-12-05T12:11:10.290Z] Total : 1380.27 86.27 0.00 0.00 970236.75 1260.31 1858399.31 00:07:41.332 00:07:41.332 real 0m8.809s 00:07:41.332 user 0m16.586s 00:07:41.332 sys 0m0.280s 00:07:41.332 12:11:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.332 ************************************ 00:07:41.332 END TEST bdev_verify_big_io 00:07:41.332 ************************************ 00:07:41.332 12:11:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:41.332 12:11:11 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.332 12:11:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:41.332 12:11:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.332 12:11:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.332 ************************************ 00:07:41.332 START TEST bdev_write_zeroes 00:07:41.332 ************************************ 00:07:41.332 12:11:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.332 [2024-12-05 12:11:11.875822] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:07:41.332 [2024-12-05 12:11:11.876419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60784 ] 00:07:41.332 [2024-12-05 12:11:12.035502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.332 [2024-12-05 12:11:12.150218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.902 Running I/O for 1 seconds... 
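The write_zeroes stage reuses the same harness with the workload flag swapped: one second of -w write_zeroes at queue depth 128 on a single core. NVMe implements Write Zeroes without a data payload, so the roughly 45-51 k IOPS reported below track command turnaround rather than data movement. Invocation as recorded:

    "$bdevperf" --json "$json" -q 128 -o 4096 -w write_zeroes -t 1 ''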
00:07:43.285 51397.00 IOPS, 200.77 MiB/s 00:07:43.285 Latency(us) 00:07:43.285 [2024-12-05T12:11:14.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.285 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.285 Nvme0n1 : 1.21 7127.85 27.84 0.00 0.00 17046.99 6654.42 261337.40 00:07:43.285 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.285 Nvme1n1 : 1.15 7590.60 29.65 0.00 0.00 16045.19 10334.52 182290.90 00:07:43.285 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.286 Nvme2n1 : 1.15 7582.21 29.62 0.00 0.00 16003.85 10384.94 183097.50 00:07:43.286 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.286 Nvme2n2 : 1.15 7573.96 29.59 0.00 0.00 16425.73 10032.05 183097.50 00:07:43.286 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.286 Nvme2n3 : 1.15 7557.98 29.52 0.00 0.00 16755.35 9578.34 188743.68 00:07:43.286 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.286 Nvme3n1 : 1.15 7549.94 29.49 0.00 0.00 16726.04 8166.79 187937.08 00:07:43.286 [2024-12-05T12:11:14.155Z] =================================================================================================================== 00:07:43.286 [2024-12-05T12:11:14.155Z] Total : 44982.53 175.71 0.00 0.00 16499.91 6654.42 261337.40 00:07:44.228 00:07:44.228 real 0m3.103s 00:07:44.228 user 0m2.752s 00:07:44.228 sys 0m0.229s 00:07:44.228 12:11:14 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.228 ************************************ 00:07:44.228 END TEST bdev_write_zeroes 00:07:44.228 ************************************ 00:07:44.228 12:11:14 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:44.228 12:11:14 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.228 12:11:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:44.228 12:11:14 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.228 12:11:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:44.228 ************************************ 00:07:44.228 START TEST bdev_json_nonenclosed 00:07:44.228 ************************************ 00:07:44.228 12:11:14 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.228 [2024-12-05 12:11:15.033687] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:07:44.228 [2024-12-05 12:11:15.033822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60837 ] 00:07:44.489 [2024-12-05 12:11:15.195985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.489 [2024-12-05 12:11:15.319610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.489 [2024-12-05 12:11:15.319705] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:44.489 [2024-12-05 12:11:15.319724] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:44.489 [2024-12-05 12:11:15.319734] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.750 ************************************ 00:07:44.750 END TEST bdev_json_nonenclosed 00:07:44.750 ************************************ 00:07:44.750 00:07:44.750 real 0m0.541s 00:07:44.750 user 0m0.339s 00:07:44.750 sys 0m0.097s 00:07:44.750 12:11:15 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.750 12:11:15 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:44.750 12:11:15 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.750 12:11:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:44.750 12:11:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.750 12:11:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:44.750 ************************************ 00:07:44.750 START TEST bdev_json_nonarray 00:07:44.750 ************************************ 00:07:44.750 12:11:15 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:45.011 [2024-12-05 12:11:15.621290] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:07:45.011 [2024-12-05 12:11:15.621431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60868 ] 00:07:45.011 [2024-12-05 12:11:15.784706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.272 [2024-12-05 12:11:15.897822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.272 [2024-12-05 12:11:15.897931] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:45.272 [2024-12-05 12:11:15.897951] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:45.272 [2024-12-05 12:11:15.897960] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.272 00:07:45.272 real 0m0.527s 00:07:45.272 user 0m0.323s 00:07:45.272 sys 0m0.100s 00:07:45.272 12:11:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.272 ************************************ 00:07:45.272 END TEST bdev_json_nonarray 00:07:45.272 ************************************ 00:07:45.272 12:11:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:45.272 12:11:16 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:45.272 12:11:16 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:45.272 12:11:16 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:45.272 12:11:16 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:45.272 12:11:16 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:45.272 12:11:16 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:45.272 12:11:16 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:45.272 12:11:16 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:45.272 12:11:16 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:45.272 12:11:16 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:45.272 12:11:16 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:45.272 ************************************ 00:07:45.272 END TEST blockdev_nvme 00:07:45.272 ************************************ 00:07:45.272 00:07:45.272 real 0m45.398s 00:07:45.272 user 1m9.063s 00:07:45.272 sys 0m6.323s 00:07:45.272 12:11:16 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.272 12:11:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.532 12:11:16 -- spdk/autotest.sh@209 -- # uname -s 00:07:45.532 12:11:16 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:45.532 12:11:16 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:45.532 12:11:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.532 12:11:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.532 12:11:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.532 ************************************ 00:07:45.532 START TEST blockdev_nvme_gpt 00:07:45.532 ************************************ 00:07:45.532 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:45.532 * Looking for test storage... 
00:07:45.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:45.532 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:45.532 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:45.532 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:45.532 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:45.532 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.532 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.532 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.532 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.532 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.532 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.532 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.532 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.532 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.532 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.532 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.532 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:45.532 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:45.532 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.532 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.533 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:45.533 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:45.533 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.533 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:45.533 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.533 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:45.533 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:45.533 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.533 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:45.533 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.533 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.533 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.533 12:11:16 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:45.533 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.533 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:45.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.533 --rc genhtml_branch_coverage=1 00:07:45.533 --rc genhtml_function_coverage=1 00:07:45.533 --rc genhtml_legend=1 00:07:45.533 --rc geninfo_all_blocks=1 00:07:45.533 --rc geninfo_unexecuted_blocks=1 00:07:45.533 00:07:45.533 ' 00:07:45.533 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:45.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.533 --rc 
genhtml_branch_coverage=1 00:07:45.533 --rc genhtml_function_coverage=1 00:07:45.533 --rc genhtml_legend=1 00:07:45.533 --rc geninfo_all_blocks=1 00:07:45.533 --rc geninfo_unexecuted_blocks=1 00:07:45.533 00:07:45.533 ' 00:07:45.533 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:45.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.533 --rc genhtml_branch_coverage=1 00:07:45.533 --rc genhtml_function_coverage=1 00:07:45.533 --rc genhtml_legend=1 00:07:45.533 --rc geninfo_all_blocks=1 00:07:45.533 --rc geninfo_unexecuted_blocks=1 00:07:45.533 00:07:45.533 ' 00:07:45.533 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:45.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.533 --rc genhtml_branch_coverage=1 00:07:45.533 --rc genhtml_function_coverage=1 00:07:45.533 --rc genhtml_legend=1 00:07:45.533 --rc geninfo_all_blocks=1 00:07:45.533 --rc geninfo_unexecuted_blocks=1 00:07:45.533 00:07:45.533 ' 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:45.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
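The lcov gate traced above ("lt 1.15 2" through cmp_versions) reduces to splitting both versions on ".", "-" and ":" and comparing the fields numerically. A condensed bash sketch of that logic, not the literal scripts/common.sh source:

version_lt() {   # succeeds when $1 sorts before $2, mirroring "lt 1.15 2" above
  local IFS=.-:
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for ((i = 0; i < n; i++)); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo 'lcov 1.15 predates 2: keep the branch/function coverage flags'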
00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60946 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60946 00:07:45.533 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60946 ']' 00:07:45.533 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.533 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.533 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.533 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.533 12:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:45.533 12:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:45.792 [2024-12-05 12:11:16.429033] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:07:45.792 [2024-12-05 12:11:16.429161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60946 ] 00:07:45.792 [2024-12-05 12:11:16.591130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.049 [2024-12-05 12:11:16.702531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.614 12:11:17 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.614 12:11:17 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:46.614 12:11:17 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:46.614 12:11:17 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:46.614 12:11:17 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:46.872 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:47.129 Waiting for block devices as requested 00:07:47.129 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.129 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.129 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.129 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:52.404 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- 
# for nvme in /sys/block/nvme* 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:52.404 BYT; 00:07:52.404 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:52.404 BYT; 00:07:52.404 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 
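The device selection just traced can be restated compactly: a namespace with no partition table makes parted report "unrecognised disk label", and the first NVMe device matching that becomes the GPT target. A sketch under those assumptions, with the device glob illustrative:

gpt_nvme=
for dev in /dev/nvme*n*; do
  # parted exits non-zero on an unlabelled disk, so capture stderr and ignore rc
  pt=$(parted "$dev" -ms print 2>&1) || true
  if [[ $pt == *"$dev: unrecognised disk label"* ]]; then
    gpt_nvme=$dev   # /dev/nvme0n1 in this run
    break
  fi
done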
00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:52.404 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:52.404 12:11:23 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:52.405 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:52.405 12:11:23 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:53.372 The operation has completed successfully. 00:07:53.372 12:11:24 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:54.756 The operation has completed successfully. 
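Condensed, the partition setup that just completed does three things: scrape the SPDK GPT type GUIDs out of module/bdev/gpt/gpt.h, create two half-disk partitions, and stamp each with a type and unique GUID. The values below are the ones this run used; the real helpers live in scripts/common.sh (get_spdk_gpt / get_spdk_gpt_old):

GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
spdk_guid=${spdk_guid//, /-}   # 0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
spdk_guid=${spdk_guid//0x/}    # 6527994e-2c5a-4eec-9613-8f5944074e8b
parted -s /dev/nvme0n1 mklabel gpt \
  mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1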
00:07:54.756 12:11:25 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:55.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:55.587 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.587 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.587 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.587 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.587 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:55.587 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.587 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.587 [] 00:07:55.587 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.587 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:55.587 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:55.587 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:55.587 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:55.587 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:55.587 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.587 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.847 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.847 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:55.847 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.847 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.847 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.847 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:55.847 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:55.847 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.847 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.110 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.110 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:56.110 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.110 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.110 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.110 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:56.110 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.110 12:11:26 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:07:56.110 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.110 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:56.110 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:56.110 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:56.110 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.110 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.110 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.110 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:56.110 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:56.111 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "22b9e7f3-20fe-48ef-a1aa-916edb4f18bf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "22b9e7f3-20fe-48ef-a1aa-916edb4f18bf",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": 
"6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "747f6723-4089-4405-9cd1-bbe3954162bb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "747f6723-4089-4405-9cd1-bbe3954162bb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f434c27d-71fc-4827-899a-7ad684de7357"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f434c27d-71fc-4827-899a-7ad684de7357",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f30afc0f-9ea1-41c5-b8b8-fa04a82f5539"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f30afc0f-9ea1-41c5-b8b8-fa04a82f5539",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "3535b907-daf9-4640-a47c-fd23317c2197"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3535b907-daf9-4640-a47c-fd23317c2197",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' 
"subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:56.111 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:56.111 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:56.111 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:56.111 12:11:26 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60946 00:07:56.111 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60946 ']' 00:07:56.111 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60946 00:07:56.111 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:56.111 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.111 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60946 00:07:56.111 killing process with pid 60946 00:07:56.111 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.111 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.111 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60946' 00:07:56.111 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60946 00:07:56.111 12:11:26 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60946 00:07:58.026 12:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:58.026 12:11:28 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:58.026 12:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:58.026 12:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.026 12:11:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:58.026 ************************************ 00:07:58.026 START TEST bdev_hello_world 00:07:58.026 ************************************ 00:07:58.026 12:11:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:58.026 [2024-12-05 12:11:28.444826] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:07:58.026 [2024-12-05 12:11:28.444945] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61566 ] 00:07:58.026 [2024-12-05 12:11:28.604999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.026 [2024-12-05 12:11:28.725046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.598 [2024-12-05 12:11:29.302032] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:58.598 [2024-12-05 12:11:29.302146] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:58.598 [2024-12-05 12:11:29.302211] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:58.598 [2024-12-05 12:11:29.307735] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:58.598 [2024-12-05 12:11:29.308604] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:58.598 [2024-12-05 12:11:29.308634] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:58.598 [2024-12-05 12:11:29.309187] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:58.598 00:07:58.598 [2024-12-05 12:11:29.309213] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:59.542 00:07:59.542 real 0m1.833s 00:07:59.542 user 0m1.497s 00:07:59.542 sys 0m0.228s 00:07:59.542 12:11:30 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.542 ************************************ 00:07:59.542 END TEST bdev_hello_world 00:07:59.542 ************************************ 00:07:59.542 12:11:30 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:59.542 12:11:30 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:59.542 12:11:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:59.542 12:11:30 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.542 12:11:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:59.542 ************************************ 00:07:59.542 START TEST bdev_bounds 00:07:59.542 ************************************ 00:07:59.542 12:11:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:59.542 12:11:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61608 00:07:59.542 Process bdevio pid: 61608 00:07:59.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
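The bdev_bounds body that follows can be reproduced by hand with the same two binaries: start bdevio in wait mode against the generated config, then kick the suite over RPC. A sketch; the harness additionally waits for the RPC socket and traps cleanup:

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
bdevio_pid=$!
# once the process listens on /var/tmp/spdk.sock, drive all 161 tests:
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"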
00:07:59.542 12:11:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:59.542 12:11:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61608' 00:07:59.542 12:11:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61608 00:07:59.542 12:11:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61608 ']' 00:07:59.542 12:11:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.542 12:11:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.542 12:11:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.542 12:11:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.542 12:11:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:59.542 12:11:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:59.542 [2024-12-05 12:11:30.350687] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:07:59.542 [2024-12-05 12:11:30.350821] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61608 ] 00:07:59.802 [2024-12-05 12:11:30.514109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.802 [2024-12-05 12:11:30.633948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.802 [2024-12-05 12:11:30.634205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.802 [2024-12-05 12:11:30.634265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.375 12:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.375 12:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:00.375 12:11:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:00.636 I/O targets: 00:08:00.636 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:00.636 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:00.636 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:00.636 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:00.636 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:00.636 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:00.636 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:00.636 00:08:00.636 00:08:00.636 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.636 http://cunit.sourceforge.net/ 00:08:00.636 00:08:00.636 00:08:00.636 Suite: bdevio tests on: Nvme3n1 00:08:00.636 Test: blockdev write read block ...passed 00:08:00.636 Test: blockdev write zeroes read block ...passed 00:08:00.636 Test: blockdev write zeroes read no split ...passed 00:08:00.636 Test: blockdev write zeroes read split ...passed 00:08:00.636 Test: blockdev write zeroes read split partial ...passed 00:08:00.636 Test: blockdev reset ...[2024-12-05 12:11:31.384760] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:00.636 passed 00:08:00.636 Test: blockdev write read 8 blocks ...[2024-12-05 12:11:31.389372] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:08:00.636 passed 00:08:00.636 Test: blockdev write read size > 128k ...passed 00:08:00.636 Test: blockdev write read invalid size ...passed 00:08:00.636 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.636 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.636 Test: blockdev write read max offset ...passed 00:08:00.636 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.636 Test: blockdev writev readv 8 blocks ...passed 00:08:00.636 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.636 Test: blockdev writev readv block ...passed 00:08:00.636 Test: blockdev writev readv size > 128k ...passed 00:08:00.636 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.636 Test: blockdev comparev and writev ...[2024-12-05 12:11:31.406667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7c04000 len:0x1000 00:08:00.636 [2024-12-05 12:11:31.406735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.636 passed 00:08:00.636 Test: blockdev nvme passthru rw ...passed 00:08:00.636 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.636 Test: blockdev nvme admin passthru ...[2024-12-05 12:11:31.408999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.636 [2024-12-05 12:11:31.409032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.636 passed 00:08:00.636 Test: blockdev copy ...passed 00:08:00.636 Suite: bdevio tests on: Nvme2n3 00:08:00.636 Test: blockdev write read block ...passed 00:08:00.636 Test: blockdev write zeroes read block ...passed 00:08:00.636 Test: blockdev write zeroes read no split ...passed 00:08:00.636 Test: blockdev write zeroes read split ...passed 00:08:00.636 Test: blockdev write zeroes read split partial ...passed 00:08:00.636 Test: blockdev reset ...[2024-12-05 12:11:31.468553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:00.636 [2024-12-05 12:11:31.473593] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:08:00.636 passed 00:08:00.636 Test: blockdev write read 8 blocks ...passed 00:08:00.636 Test: blockdev write read size > 128k ...passed 00:08:00.636 Test: blockdev write read invalid size ...passed 00:08:00.636 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.637 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.637 Test: blockdev write read max offset ...passed 00:08:00.637 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.637 Test: blockdev writev readv 8 blocks ...passed 00:08:00.637 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.637 Test: blockdev writev readv block ...passed 00:08:00.637 Test: blockdev writev readv size > 128k ...passed 00:08:00.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.637 Test: blockdev comparev and writev ...[2024-12-05 12:11:31.491220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7c02000 len:0x1000 00:08:00.637 [2024-12-05 12:11:31.491282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.637 passed 00:08:00.637 Test: blockdev nvme passthru rw ...passed 00:08:00.637 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.637 Test: blockdev nvme admin passthru ...[2024-12-05 12:11:31.493287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.637 [2024-12-05 12:11:31.493319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.637 passed 00:08:00.637 Test: blockdev copy ...passed 00:08:00.637 Suite: bdevio tests on: Nvme2n2 00:08:00.897 Test: blockdev write read block ...passed 00:08:00.897 Test: blockdev write zeroes read block ...passed 00:08:00.897 Test: blockdev write zeroes read no split ...passed 00:08:00.897 Test: blockdev write zeroes read split ...passed 00:08:00.898 Test: blockdev write zeroes read split partial ...passed 00:08:00.898 Test: blockdev reset ...[2024-12-05 12:11:31.550762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:00.898 [2024-12-05 12:11:31.555526] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:08:00.898 passed
00:08:00.898 Test: blockdev write read 8 blocks ...passed 00:08:00.898 Test: blockdev write read size > 128k ...passed 00:08:00.898 Test: blockdev write read invalid size ...passed 00:08:00.898 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.898 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.898 Test: blockdev write read max offset ...passed 00:08:00.898 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.898 Test: blockdev writev readv 8 blocks ...passed 00:08:00.898 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.898 Test: blockdev writev readv block ...passed 00:08:00.898 Test: blockdev writev readv size > 128k ...passed 00:08:00.898 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.898 Test: blockdev comparev and writev ...[2024-12-05 12:11:31.572570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b2838000 len:0x1000 00:08:00.898 [2024-12-05 12:11:31.572620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.898 passed 00:08:00.898 Test: blockdev nvme passthru rw ...passed 00:08:00.898 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:11:31.574611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.898 [2024-12-05 12:11:31.574762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.898 passed 00:08:00.898 Test: blockdev nvme admin passthru ...passed 00:08:00.898 Test: blockdev copy ...passed 00:08:00.898 Suite: bdevio tests on: Nvme2n1 00:08:00.898 Test: blockdev write read block ...passed 00:08:00.898 Test: blockdev write zeroes read block ...passed 00:08:00.898 Test: blockdev write zeroes read no split ...passed 00:08:00.898 Test: blockdev write zeroes read split ...passed 00:08:00.898 Test: blockdev write zeroes read split partial ...passed 00:08:00.898 Test: blockdev reset ...[2024-12-05 12:11:31.633757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:00.898 [2024-12-05 12:11:31.638596] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:08:00.898 passed
00:08:00.898 Test: blockdev write read 8 blocks ...passed 00:08:00.898 Test: blockdev write read size > 128k ...passed 00:08:00.898 Test: blockdev write read invalid size ...passed 00:08:00.898 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.898 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.898 Test: blockdev write read max offset ...passed 00:08:00.898 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.898 Test: blockdev writev readv 8 blocks ...passed 00:08:00.898 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.898 Test: blockdev writev readv block ...passed 00:08:00.898 Test: blockdev writev readv size > 128k ...passed 00:08:00.898 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.898 Test: blockdev comparev and writev ...[2024-12-05 12:11:31.654031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b2834000 len:0x1000 [2024-12-05 12:11:31.654216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.898 passed 00:08:00.898 Test: blockdev nvme passthru rw ...passed 00:08:00.898 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.898 Test: blockdev nvme admin passthru ...[2024-12-05 12:11:31.656185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.898 [2024-12-05 12:11:31.656221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.898 passed 00:08:00.898 Test: blockdev copy ...passed 00:08:00.898 Suite: bdevio tests on: Nvme1n1p2 00:08:00.898 Test: blockdev write read block ...passed 00:08:00.898 Test: blockdev write zeroes read block ...passed 00:08:00.898 Test: blockdev write zeroes read no split ...passed 00:08:00.898 Test: blockdev write zeroes read split ...passed 00:08:00.898 Test: blockdev write zeroes read split partial ...passed 00:08:00.898 Test: blockdev reset ...[2024-12-05 12:11:31.714406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:00.898 [2024-12-05 12:11:31.718655] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:08:00.898 00:08:00.898 Test: blockdev write read 8 blocks ...passed 00:08:00.898 Test: blockdev write read size > 128k ...passed 00:08:00.898 Test: blockdev write read invalid size ...passed 00:08:00.898 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.898 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.898 Test: blockdev write read max offset ...passed 00:08:00.898 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.898 Test: blockdev writev readv 8 blocks ...passed 00:08:00.898 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.898 Test: blockdev writev readv block ...passed 00:08:00.898 Test: blockdev writev readv size > 128k ...passed 00:08:00.898 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.898 Test: blockdev comparev and writev ...[2024-12-05 12:11:31.735314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 lpassed 00:08:00.898 Test: blockdev nvme passthru rw ...passed 00:08:00.898 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.898 Test: blockdev nvme admin passthru ...passed 00:08:00.898 Test: blockdev copy ...en:1 SGL DATA BLOCK ADDRESS 0x2b2830000 len:0x1000 00:08:00.898 [2024-12-05 12:11:31.735522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.898 passed 00:08:00.898 Suite: bdevio tests on: Nvme1n1p1 00:08:00.898 Test: blockdev write read block ...passed 00:08:00.898 Test: blockdev write zeroes read block ...passed 00:08:00.898 Test: blockdev write zeroes read no split ...passed 00:08:01.158 Test: blockdev write zeroes read split ...passed 00:08:01.158 Test: blockdev write zeroes read split partial ...passed 00:08:01.158 Test: blockdev reset ...[2024-12-05 12:11:31.787537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:01.158 [2024-12-05 12:11:31.791887] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spassed 00:08:01.158 Test: blockdev write read 8 blocks ...uccessful. 
00:08:01.158 passed 00:08:01.158 Test: blockdev write read 8 blocks ...passed 00:08:01.158 Test: blockdev write read size > 128k ...passed 00:08:01.158 Test: blockdev write read invalid size ...passed 00:08:01.158 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.158 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.158 Test: blockdev write read max offset ...passed 00:08:01.158 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.158 Test: blockdev writev readv 8 blocks ...passed 00:08:01.158 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.158 Test: blockdev writev readv block ...passed 00:08:01.158 Test: blockdev writev readv size > 128k ...passed 00:08:01.159 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.159 Test: blockdev comparev and writev ...[2024-12-05 12:11:31.811878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b860e000 len:0x1000 00:08:01.159 [2024-12-05 12:11:31.812059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.159 passed 00:08:01.159 Test: blockdev nvme passthru rw ...passed 00:08:01.159 Test: blockdev nvme passthru vendor specific ...passed 00:08:01.159 Test: blockdev nvme admin passthru ...passed 00:08:01.159 Test: blockdev copy ...passed 00:08:01.159 Suite: bdevio tests on: Nvme0n1 00:08:01.159 Test: blockdev write read block ...passed 00:08:01.159 Test: blockdev write zeroes read block ...passed 00:08:01.159 Test: blockdev write zeroes read no split ...passed 00:08:01.159 Test: blockdev write zeroes read split ...passed 00:08:01.159 Test: blockdev write zeroes read split partial ...passed 00:08:01.159 Test: blockdev reset ...[2024-12-05 12:11:31.863405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:01.159 [2024-12-05 12:11:31.868664] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:01.159 passed 00:08:01.159 Test: blockdev write read 8 blocks ...passed 00:08:01.159 Test: blockdev write read size > 128k ...passed 00:08:01.159 Test: blockdev write read invalid size ...passed 00:08:01.159 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.159 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.159 Test: blockdev write read max offset ...passed 00:08:01.159 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.159 Test: blockdev writev readv 8 blocks ...passed 00:08:01.159 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.159 Test: blockdev writev readv block ...passed 00:08:01.159 Test: blockdev writev readv size > 128k ...passed 00:08:01.159 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.159 Test: blockdev comparev and writev ...passed 00:08:01.159 Test: blockdev nvme passthru rw ...[2024-12-05 12:11:31.884791] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:01.159 separate metadata which is not supported yet. 
00:08:01.159 passed 00:08:01.159 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:11:31.886372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:01.159 [2024-12-05 12:11:31.886557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:01.159 passed 00:08:01.159 Test: blockdev nvme admin passthru ...passed 00:08:01.159 Test: blockdev copy ...passed 00:08:01.159 00:08:01.159 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.159 suites 7 7 n/a 0 0 00:08:01.159 tests 161 161 161 0 0 00:08:01.159 asserts 1025 1025 1025 0 n/a 00:08:01.159 00:08:01.159 Elapsed time = 1.412 seconds 00:08:01.159 0 00:08:01.159 12:11:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61608 00:08:01.159 12:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61608 ']' 00:08:01.159 12:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61608 00:08:01.159 12:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:01.159 12:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.159 12:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61608 00:08:01.159 12:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.159 12:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.159 killing process with pid 61608 12:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61608' 12:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61608 12:11:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61608 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:02.099 00:08:02.099 real 0m2.394s 00:08:02.099 user 0m5.963s 00:08:02.099 sys 0m0.345s 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:02.099 ************************************ 00:08:02.099 END TEST bdev_bounds 00:08:02.099 ************************************ 00:08:02.099 12:11:32 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:02.099 12:11:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:02.099 12:11:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.099 12:11:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:02.099 ************************************ 00:08:02.099 START TEST bdev_nbd 00:08:02.099 ************************************ 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61662 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61662 /var/tmp/spdk-nbd.sock 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61662 ']' 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:02.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.099 12:11:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:02.099 [2024-12-05 12:11:32.813763] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:08:02.099 [2024-12-05 12:11:32.814023] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.360 [2024-12-05 12:11:32.973870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.360 [2024-12-05 12:11:33.092029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.932 12:11:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.932 12:11:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:02.932 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:02.932 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.932 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.932 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:02.932 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:02.932 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.932 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.932 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:02.932 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:02.933 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:02.933 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:02.933 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:02.933 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.194 1+0 records in 00:08:03.194 1+0 records out 00:08:03.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000980511 s, 4.2 MB/s 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.194 12:11:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.456 1+0 records in 00:08:03.456 1+0 records out 00:08:03.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731983 s, 5.6 MB/s 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.456 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.717 1+0 records in 00:08:03.717 1+0 records out 00:08:03.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776366 s, 5.3 MB/s 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.717 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.978 1+0 records in 00:08:03.978 1+0 records out 00:08:03.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789803 s, 5.2 MB/s 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.978 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.236 1+0 records in 00:08:04.236 1+0 records out 00:08:04.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115844 s, 3.5 MB/s 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.236 12:11:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.495 1+0 records in 00:08:04.495 1+0 records out 00:08:04.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386861 s, 10.6 MB/s 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.495 1+0 records in 00:08:04.495 1+0 records out 00:08:04.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621907 s, 6.6 MB/s 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.495 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.753 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:04.753 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:04.753 { 00:08:04.753 "nbd_device": "/dev/nbd0", 00:08:04.753 "bdev_name": "Nvme0n1" 00:08:04.753 }, 00:08:04.753 { 00:08:04.753 "nbd_device": "/dev/nbd1", 00:08:04.753 "bdev_name": "Nvme1n1p1" 00:08:04.753 }, 00:08:04.753 { 00:08:04.753 "nbd_device": "/dev/nbd2", 00:08:04.753 "bdev_name": "Nvme1n1p2" 00:08:04.753 }, 00:08:04.753 { 00:08:04.753 "nbd_device": "/dev/nbd3", 00:08:04.753 "bdev_name": "Nvme2n1" 00:08:04.753 }, 00:08:04.753 { 00:08:04.753 "nbd_device": "/dev/nbd4", 00:08:04.753 "bdev_name": "Nvme2n2" 00:08:04.753 }, 00:08:04.753 { 00:08:04.753 "nbd_device": "/dev/nbd5", 00:08:04.753 "bdev_name": "Nvme2n3" 00:08:04.753 }, 00:08:04.753 { 00:08:04.753 "nbd_device": "/dev/nbd6", 00:08:04.753 "bdev_name": "Nvme3n1" 00:08:04.753 } 00:08:04.753 ]' 00:08:04.753 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:04.753 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:04.753 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:04.753 { 00:08:04.753 "nbd_device": "/dev/nbd0", 00:08:04.753 "bdev_name": "Nvme0n1" 00:08:04.753 }, 00:08:04.753 { 00:08:04.753 "nbd_device": "/dev/nbd1", 00:08:04.753 "bdev_name": "Nvme1n1p1" 00:08:04.753 }, 00:08:04.753 { 00:08:04.753 "nbd_device": "/dev/nbd2", 00:08:04.753 "bdev_name": "Nvme1n1p2" 00:08:04.753 }, 00:08:04.753 { 00:08:04.753 "nbd_device": "/dev/nbd3", 00:08:04.753 "bdev_name": "Nvme2n1" 00:08:04.753 }, 00:08:04.753 { 00:08:04.753 "nbd_device": "/dev/nbd4", 00:08:04.753 "bdev_name": "Nvme2n2" 00:08:04.753 }, 00:08:04.753 { 00:08:04.753 "nbd_device": "/dev/nbd5", 00:08:04.753 "bdev_name": "Nvme2n3" 00:08:04.753 }, 00:08:04.753 { 00:08:04.753 "nbd_device": "/dev/nbd6", 00:08:04.753 "bdev_name": "Nvme3n1" 00:08:04.753 } 00:08:04.753 ]' 00:08:04.753 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:04.753 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.753 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:04.753 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:04.753 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:04.753 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.753 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:05.011 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:05.011 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:05.011 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:05.011 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.011 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.011 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:05.011 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.011 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.011 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.011 12:11:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:05.318 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:05.318 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:05.318 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:05.318 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.318 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.318 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:05.318 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.318 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.318 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.318 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.576 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:05.835 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:05.835 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:05.835 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:05.835 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.835 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.835 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:05.835 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.835 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.835 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.835 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:06.094 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:06.094 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:06.094 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:06.094 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.094 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.094 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:06.094 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.094 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.094 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.094 12:11:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:06.354 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:06.354 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:06.354 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:08:06.354 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.354 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.354 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:06.354 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.354 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.354 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.354 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.354 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:06.614 12:11:37 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:06.614 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:06.874 /dev/nbd0 00:08:06.874 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:06.874 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:06.874 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:06.874 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:06.874 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:06.874 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:06.874 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:06.874 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:06.874 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:06.874 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:06.875 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.875 1+0 records in 00:08:06.875 1+0 records out 00:08:06.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000753273 s, 5.4 MB/s 00:08:06.875 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.875 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:06.875 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.875 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:06.875 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:06.875 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.875 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:06.875 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:07.136 /dev/nbd1 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.136 12:11:37 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.136 1+0 records in 00:08:07.136 1+0 records out 00:08:07.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101245 s, 4.0 MB/s 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:07.136 12:11:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:07.397 /dev/nbd10 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.398 1+0 records in 00:08:07.398 1+0 records out 00:08:07.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796384 s, 5.1 MB/s 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:07.398 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:07.398 /dev/nbd11 00:08:07.658 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:07.658 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:07.658 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:07.658 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:07.658 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.658 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.658 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:07.658 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:07.658 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.658 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.658 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.658 1+0 records in 00:08:07.659 1+0 records out 00:08:07.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000818296 s, 5.0 MB/s 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:07.659 /dev/nbd12 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.659 1+0 records in 00:08:07.659 1+0 records out 00:08:07.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000955163 s, 4.3 MB/s 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:07.659 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:07.920 /dev/nbd13 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.920 1+0 records in 00:08:07.920 1+0 records out 00:08:07.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000841527 s, 4.9 MB/s 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:07.920 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:08.182 /dev/nbd14 00:08:08.182 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:08.182 12:11:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:08.182 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:08:08.182 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.182 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.182 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.182 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:08:08.182 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.182 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.182 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.182 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.182 1+0 records in 00:08:08.182 1+0 records out 00:08:08.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113751 s, 3.6 MB/s 00:08:08.182 12:11:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.182 12:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.182 12:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.182 12:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.182 12:11:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:08.182 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.182 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.182 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.182 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.182 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:08.444 { 00:08:08.444 "nbd_device": "/dev/nbd0", 00:08:08.444 "bdev_name": "Nvme0n1" 00:08:08.444 }, 00:08:08.444 { 00:08:08.444 "nbd_device": "/dev/nbd1", 00:08:08.444 "bdev_name": "Nvme1n1p1" 00:08:08.444 }, 00:08:08.444 { 00:08:08.444 "nbd_device": "/dev/nbd10", 00:08:08.444 "bdev_name": "Nvme1n1p2" 00:08:08.444 }, 00:08:08.444 { 00:08:08.444 "nbd_device": "/dev/nbd11", 00:08:08.444 "bdev_name": "Nvme2n1" 00:08:08.444 }, 00:08:08.444 { 00:08:08.444 "nbd_device": "/dev/nbd12", 00:08:08.444 "bdev_name": "Nvme2n2" 00:08:08.444 }, 00:08:08.444 { 00:08:08.444 "nbd_device": "/dev/nbd13", 00:08:08.444 "bdev_name": "Nvme2n3" 
00:08:08.444 }, 00:08:08.444 { 00:08:08.444 "nbd_device": "/dev/nbd14", 00:08:08.444 "bdev_name": "Nvme3n1" 00:08:08.444 } 00:08:08.444 ]' 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:08.444 { 00:08:08.444 "nbd_device": "/dev/nbd0", 00:08:08.444 "bdev_name": "Nvme0n1" 00:08:08.444 }, 00:08:08.444 { 00:08:08.444 "nbd_device": "/dev/nbd1", 00:08:08.444 "bdev_name": "Nvme1n1p1" 00:08:08.444 }, 00:08:08.444 { 00:08:08.444 "nbd_device": "/dev/nbd10", 00:08:08.444 "bdev_name": "Nvme1n1p2" 00:08:08.444 }, 00:08:08.444 { 00:08:08.444 "nbd_device": "/dev/nbd11", 00:08:08.444 "bdev_name": "Nvme2n1" 00:08:08.444 }, 00:08:08.444 { 00:08:08.444 "nbd_device": "/dev/nbd12", 00:08:08.444 "bdev_name": "Nvme2n2" 00:08:08.444 }, 00:08:08.444 { 00:08:08.444 "nbd_device": "/dev/nbd13", 00:08:08.444 "bdev_name": "Nvme2n3" 00:08:08.444 }, 00:08:08.444 { 00:08:08.444 "nbd_device": "/dev/nbd14", 00:08:08.444 "bdev_name": "Nvme3n1" 00:08:08.444 } 00:08:08.444 ]' 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:08.444 /dev/nbd1 00:08:08.444 /dev/nbd10 00:08:08.444 /dev/nbd11 00:08:08.444 /dev/nbd12 00:08:08.444 /dev/nbd13 00:08:08.444 /dev/nbd14' 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:08.444 /dev/nbd1 00:08:08.444 /dev/nbd10 00:08:08.444 /dev/nbd11 00:08:08.444 /dev/nbd12 00:08:08.444 /dev/nbd13 00:08:08.444 /dev/nbd14' 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:08.444 256+0 records in 00:08:08.444 256+0 records out 00:08:08.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540081 s, 194 MB/s 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.444 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:08.706 256+0 records in 00:08:08.706 256+0 records out 00:08:08.706 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.175881 s, 6.0 MB/s 00:08:08.706 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.706 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:08.967 256+0 records in 00:08:08.967 256+0 records out 00:08:08.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.194218 s, 5.4 MB/s 00:08:08.967 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.967 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:09.227 256+0 records in 00:08:09.227 256+0 records out 00:08:09.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.337797 s, 3.1 MB/s 00:08:09.227 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.227 12:11:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:09.227 256+0 records in 00:08:09.227 256+0 records out 00:08:09.227 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114735 s, 9.1 MB/s 00:08:09.227 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.227 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:09.489 256+0 records in 00:08:09.489 256+0 records out 00:08:09.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.205117 s, 5.1 MB/s 00:08:09.489 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.489 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:09.751 256+0 records in 00:08:09.751 256+0 records out 00:08:09.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.26522 s, 4.0 MB/s 00:08:09.751 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.751 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:10.011 256+0 records in 00:08:10.011 256+0 records out 00:08:10.011 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117039 s, 9.0 MB/s 00:08:10.011 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:10.011 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:10.011 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:10.011 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:10.011 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:10.011 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:10.011 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:10.011 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:10.011 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:10.011 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.011 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:10.011 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.011 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:10.011 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.011 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:10.012 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.012 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:10.012 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.012 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:10.012 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.012 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:10.012 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:10.012 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:10.012 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.012 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:10.012 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:10.012 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:10.012 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.012 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:10.272 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:10.272 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:10.272 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:10.272 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.272 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.272 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:10.272 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.272 12:11:40 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:10.272 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.272 12:11:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.592 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:10.855 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:10.855 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:10.855 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:10.855 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.855 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.855 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:10.855 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.855 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.855 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.855 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:11.117 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:11.117 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:11.117 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:11.117 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.117 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.117 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:11.117 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.117 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.117 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.117 12:11:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.378 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:11.639 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:11.640 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:11.901 malloc_lvol_verify 00:08:11.901 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:12.162 96e776ed-e969-40db-8ba9-e45294115250 00:08:12.162 12:11:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:12.422 c66ba1c7-3e2c-45a6-bfd2-6566445554a7 00:08:12.423 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:12.684 /dev/nbd0 00:08:12.684 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:12.684 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:12.684 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:12.684 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:12.684 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:12.684 mke2fs 1.47.0 (5-Feb-2023) 00:08:12.684 Discarding device blocks: 0/4096 done 00:08:12.684 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:12.684 00:08:12.684 Allocating group tables: 0/1 done 00:08:12.684 Writing inode tables: 0/1 done 00:08:12.684 Creating journal (1024 blocks): done 00:08:12.684 Writing superblocks and filesystem accounting information: 0/1 done 00:08:12.684 00:08:12.684 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:12.684 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.684 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:12.684 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:12.684 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:12.684 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:12.684 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:12.944 killing process with pid 61662 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61662 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61662 ']' 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61662 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61662 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.944 12:11:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61662' 00:08:12.945 12:11:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61662 00:08:12.945 12:11:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61662 00:08:17.217 12:11:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:17.217 00:08:17.217 real 0m14.621s 00:08:17.217 user 0m18.155s 00:08:17.217 sys 0m4.585s 00:08:17.217 12:11:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.217 ************************************ 00:08:17.217 END TEST bdev_nbd 00:08:17.217 12:11:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:17.217 ************************************ 00:08:17.217 skipping fio tests on NVMe due to multi-ns failures. 00:08:17.217 12:11:47 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:08:17.217 12:11:47 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:08:17.217 12:11:47 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:08:17.217 12:11:47 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:17.217 12:11:47 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:17.217 12:11:47 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:17.217 12:11:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:17.217 12:11:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.217 12:11:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:17.217 ************************************ 00:08:17.217 START TEST bdev_verify 00:08:17.217 ************************************ 00:08:17.217 12:11:47 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:17.217 [2024-12-05 12:11:47.491704] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:08:17.217 [2024-12-05 12:11:47.491830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62088 ] 00:08:17.217 [2024-12-05 12:11:47.653946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:17.217 [2024-12-05 12:11:47.768445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.217 [2024-12-05 12:11:47.768528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.819 Running I/O for 5 seconds... 
00:08:19.703 20288.00 IOPS, 79.25 MiB/s [2024-12-05T12:11:51.958Z] 19791.50 IOPS, 77.31 MiB/s [2024-12-05T12:11:52.891Z] 19289.33 IOPS, 75.35 MiB/s [2024-12-05T12:11:53.821Z] 19566.00 IOPS, 76.43 MiB/s [2024-12-05T12:11:53.821Z] 19974.60 IOPS, 78.03 MiB/s 00:08:22.952 Latency(us) 00:08:22.952 [2024-12-05T12:11:53.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.952 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.952 Verification LBA range: start 0x0 length 0xbd0bd 00:08:22.952 Nvme0n1 : 5.07 1387.86 5.42 0.00 0.00 91869.71 18551.73 145187.45 00:08:22.952 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.952 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:22.952 Nvme0n1 : 5.08 1427.83 5.58 0.00 0.00 89117.71 13107.20 147607.24 00:08:22.952 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.952 Verification LBA range: start 0x0 length 0x4ff80 00:08:22.952 Nvme1n1p1 : 5.09 1384.24 5.41 0.00 0.00 91597.90 8570.09 168578.76 00:08:22.952 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.952 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:22.952 Nvme1n1p1 : 5.06 1428.17 5.58 0.00 0.00 89120.95 11846.89 150833.62 00:08:22.952 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.952 Verification LBA range: start 0x0 length 0x4ff7f 00:08:22.952 Nvme1n1p2 : 5.09 1383.33 5.40 0.00 0.00 91460.14 10788.23 165352.37 00:08:22.952 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.952 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:22.952 Nvme1n1p2 : 5.08 1435.37 5.61 0.00 0.00 88567.63 16131.94 153253.42 00:08:22.952 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.952 Verification LBA range: start 0x0 length 0x80000 00:08:22.952 Nvme2n1 : 5.10 1388.61 5.42 0.00 0.00 91107.83 5873.03 162932.58 00:08:22.952 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.952 Verification LBA range: start 0x80000 length 0x80000 00:08:22.952 Nvme2n1 : 5.08 1423.78 5.56 0.00 0.00 89003.63 23492.14 175838.13 00:08:22.952 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.952 Verification LBA range: start 0x0 length 0x80000 00:08:22.952 Nvme2n2 : 5.11 1387.29 5.42 0.00 0.00 90947.14 5494.94 162125.98 00:08:22.952 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.952 Verification LBA range: start 0x80000 length 0x80000 00:08:22.952 Nvme2n2 : 5.08 1424.25 5.56 0.00 0.00 88728.22 3251.59 171805.14 00:08:22.952 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.952 Verification LBA range: start 0x0 length 0x80000 00:08:22.952 Nvme2n3 : 5.11 1390.14 5.43 0.00 0.00 90712.43 7461.02 162125.98 00:08:22.952 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.952 Verification LBA range: start 0x80000 length 0x80000 00:08:22.952 Nvme2n3 : 5.09 1433.80 5.60 0.00 0.00 88036.65 2129.92 169385.35 00:08:22.952 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.952 Verification LBA range: start 0x0 length 0x20000 00:08:22.952 Nvme3n1 : 5.11 1388.05 5.42 0.00 0.00 90620.72 7057.72 162932.58 00:08:22.952 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.952 Verification LBA range: start 0x20000 length 0x20000 00:08:22.952 
Nvme3n1 : 5.09 1433.21 5.60 0.00 0.00 87840.32 3327.21 168578.76 00:08:22.952 [2024-12-05T12:11:53.821Z] =================================================================================================================== 00:08:22.952 [2024-12-05T12:11:53.821Z] Total : 19715.93 77.02 0.00 0.00 89890.48 2129.92 175838.13 00:08:24.320 00:08:24.320 real 0m7.503s 00:08:24.320 user 0m13.983s 00:08:24.320 sys 0m0.256s 00:08:24.320 12:11:54 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.320 12:11:54 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:24.320 ************************************ 00:08:24.320 END TEST bdev_verify 00:08:24.320 ************************************ 00:08:24.320 12:11:54 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:24.320 12:11:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:24.320 12:11:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.320 12:11:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:24.320 ************************************ 00:08:24.320 START TEST bdev_verify_big_io 00:08:24.320 ************************************ 00:08:24.320 12:11:54 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:24.320 [2024-12-05 12:11:55.041938] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:08:24.320 [2024-12-05 12:11:55.042093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62186 ] 00:08:24.576 [2024-12-05 12:11:55.201432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:24.576 [2024-12-05 12:11:55.279368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.576 [2024-12-05 12:11:55.279380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.139 Running I/O for 5 seconds... 
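bdev_verify_big_io is the same verify pass rerun with -o 65536, i.e. 64 KiB IOs instead of 4 KiB, exercising larger transfers (note the much lower IOPS and wider latency spread in the table below); the throughput arithmetic still holds, e.g. the first sample works out to 1968 IOPS x 64 KiB = 123.00 MiB/s. The traced command, with the repo path shortened:

    build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3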
00:08:30.249 1968.00 IOPS, 123.00 MiB/s [2024-12-05T12:12:02.053Z] 1879.00 IOPS, 117.44 MiB/s [2024-12-05T12:12:02.311Z] 2301.67 IOPS, 143.85 MiB/s [2024-12-05T12:12:02.311Z] 2277.75 IOPS, 142.36 MiB/s 00:08:31.442 Latency(us) 00:08:31.442 [2024-12-05T12:12:02.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.442 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.442 Verification LBA range: start 0x0 length 0xbd0b 00:08:31.442 Nvme0n1 : 6.00 64.03 4.00 0.00 0.00 1866907.04 10132.87 2619826.81 00:08:31.442 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.442 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:31.442 Nvme0n1 : 6.01 79.90 4.99 0.00 0.00 1511452.88 17644.31 2000360.37 00:08:31.442 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.442 Verification LBA range: start 0x0 length 0x4ff8 00:08:31.442 Nvme1n1p1 : 6.00 89.35 5.58 0.00 0.00 1305289.77 120182.94 1316366.18 00:08:31.442 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.442 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:31.442 Nvme1n1p1 : 5.88 109.43 6.84 0.00 0.00 1079799.33 100421.32 1064707.94 00:08:31.442 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.442 Verification LBA range: start 0x0 length 0x4ff7 00:08:31.442 Nvme1n1p2 : 6.08 94.70 5.92 0.00 0.00 1196503.22 80256.39 1245385.65 00:08:31.442 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.442 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:31.442 Nvme1n1p2 : 5.77 110.97 6.94 0.00 0.00 1046372.35 120989.54 1000180.18 00:08:31.442 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.442 Verification LBA range: start 0x0 length 0x8000 00:08:31.442 Nvme2n1 : 6.12 91.44 5.72 0.00 0.00 1194515.49 38515.00 2594015.70 00:08:31.442 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.442 Verification LBA range: start 0x8000 length 0x8000 00:08:31.442 Nvme2n1 : 5.88 113.48 7.09 0.00 0.00 988278.95 106470.79 1109877.37 00:08:31.442 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.442 Verification LBA range: start 0x0 length 0x8000 00:08:31.442 Nvme2n2 : 6.15 94.79 5.92 0.00 0.00 1102406.66 23794.61 2632732.36 00:08:31.442 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.442 Verification LBA range: start 0x8000 length 0x8000 00:08:31.442 Nvme2n2 : 6.01 124.12 7.76 0.00 0.00 883810.20 55251.89 1122782.92 00:08:31.442 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.442 Verification LBA range: start 0x0 length 0x8000 00:08:31.442 Nvme2n3 : 6.28 121.42 7.59 0.00 0.00 838787.30 10586.58 2645637.91 00:08:31.442 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.442 Verification LBA range: start 0x8000 length 0x8000 00:08:31.442 Nvme2n3 : 6.06 131.08 8.19 0.00 0.00 811078.92 44161.18 1142141.24 00:08:31.442 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.442 Verification LBA range: start 0x0 length 0x2000 00:08:31.442 Nvme3n1 : 6.35 169.05 10.57 0.00 0.00 583864.06 204.80 2090699.22 00:08:31.442 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.442 Verification LBA range: start 0x2000 length 0x2000 00:08:31.442 Nvme3n1 : 6.12 150.75 9.42 0.00 
0.00 687202.72 576.59 1167952.34 00:08:31.442 [2024-12-05T12:12:02.311Z] =================================================================================================================== 00:08:31.442 [2024-12-05T12:12:02.311Z] Total : 1544.51 96.53 0.00 0.00 1000230.58 204.80 2645637.91 00:08:35.623 00:08:35.623 real 0m11.112s 00:08:35.623 user 0m21.253s 00:08:35.623 sys 0m0.255s 00:08:35.623 12:12:06 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.623 12:12:06 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:35.623 ************************************ 00:08:35.623 END TEST bdev_verify_big_io 00:08:35.623 ************************************ 00:08:35.623 12:12:06 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:35.623 12:12:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:35.623 12:12:06 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.623 12:12:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:35.623 ************************************ 00:08:35.623 START TEST bdev_write_zeroes 00:08:35.623 ************************************ 00:08:35.623 12:12:06 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:35.623 [2024-12-05 12:12:06.191356] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:08:35.623 [2024-12-05 12:12:06.191491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62306 ] 00:08:35.623 [2024-12-05 12:12:06.350882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.623 [2024-12-05 12:12:06.471314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.557 Running I/O for 1 seconds... 
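bdev_write_zeroes switches the workload to -w write_zeroes on a single core (mask 0x1) for one second; it only requires that each bdev advertise the operation. That capability shows up later in this log in the bdev JSON dumps ("write_zeroes": true under supported_io_types) and can also be queried directly; a small sketch, assuming a target app is up with the same config on the default RPC socket:

    scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0].supported_io_types.write_zeroes'   # expect: true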
00:08:37.489 61376.00 IOPS, 239.75 MiB/s 00:08:37.489 Latency(us) 00:08:37.489 [2024-12-05T12:12:08.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.489 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.489 Nvme0n1 : 1.02 8769.21 34.25 0.00 0.00 14562.56 11746.07 26416.05 00:08:37.489 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.489 Nvme1n1p1 : 1.02 8758.49 34.21 0.00 0.00 14557.17 11695.66 26012.75 00:08:37.489 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.489 Nvme1n1p2 : 1.02 8747.08 34.17 0.00 0.00 14509.81 11897.30 23794.61 00:08:37.489 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.489 Nvme2n1 : 1.03 8737.12 34.13 0.00 0.00 14503.20 11695.66 23088.84 00:08:37.489 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.489 Nvme2n2 : 1.03 8727.27 34.09 0.00 0.00 14485.70 11947.72 22483.89 00:08:37.489 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.489 Nvme2n3 : 1.03 8717.34 34.05 0.00 0.00 14438.30 10788.23 24097.08 00:08:37.489 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:37.489 Nvme3n1 : 1.03 8707.49 34.01 0.00 0.00 14408.96 8318.03 25811.10 00:08:37.489 [2024-12-05T12:12:08.358Z] =================================================================================================================== 00:08:37.489 [2024-12-05T12:12:08.358Z] Total : 61164.00 238.92 0.00 0.00 14495.10 8318.03 26416.05 00:08:38.422 00:08:38.422 real 0m2.808s 00:08:38.422 user 0m2.483s 00:08:38.422 sys 0m0.210s 00:08:38.422 12:12:08 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.422 12:12:08 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:38.422 ************************************ 00:08:38.422 END TEST bdev_write_zeroes 00:08:38.422 ************************************ 00:08:38.422 12:12:08 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:38.422 12:12:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:38.422 12:12:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.422 12:12:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:38.422 ************************************ 00:08:38.422 START TEST bdev_json_nonenclosed 00:08:38.422 ************************************ 00:08:38.422 12:12:08 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:38.422 [2024-12-05 12:12:09.058796] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
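bdev_json_nonenclosed is a negative test: bdevperf is pointed at nonenclosed.json and must fail cleanly rather than crash, producing the "not enclosed in {}" error below. The repo file's exact contents are not reproduced in this log, but a hypothetical config of the rejected shape would be a bare top-level member with no surrounding object:

    "subsystems": []

json_config_prepare_ctx refuses it and the app shuts down cleanly, which is what the test asserts.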
00:08:38.422 [2024-12-05 12:12:09.058937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62359 ] 00:08:38.422 [2024-12-05 12:12:09.219488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.700 [2024-12-05 12:12:09.320398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.700 [2024-12-05 12:12:09.320489] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:38.700 [2024-12-05 12:12:09.320507] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:38.700 [2024-12-05 12:12:09.320516] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:38.700 00:08:38.700 real 0m0.512s 00:08:38.700 user 0m0.304s 00:08:38.700 sys 0m0.103s 00:08:38.700 12:12:09 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.700 12:12:09 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:38.700 ************************************ 00:08:38.700 END TEST bdev_json_nonenclosed 00:08:38.700 ************************************ 00:08:38.700 12:12:09 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:38.700 12:12:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:38.700 12:12:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.700 12:12:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:38.700 ************************************ 00:08:38.700 START TEST bdev_json_nonarray 00:08:38.700 ************************************ 00:08:38.700 12:12:09 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:38.959 [2024-12-05 12:12:09.609581] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:08:38.959 [2024-12-05 12:12:09.609887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62390 ] 00:08:38.959 [2024-12-05 12:12:09.770742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.217 [2024-12-05 12:12:09.870968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.217 [2024-12-05 12:12:09.871053] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
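bdev_json_nonarray is the companion negative test: nonarray.json parses as JSON, but its "subsystems" key is not an array, so json_config_prepare_ctx rejects it with the error above and the app again shuts down cleanly (the spdk_app_stop'd on non-zero warning that follows is the expected outcome). The real file isn't shown here either; a hypothetical config of that shape:

    { "subsystems": {} }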
00:08:39.217 [2024-12-05 12:12:09.871070] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:39.217 [2024-12-05 12:12:09.871079] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:39.217 00:08:39.217 real 0m0.514s 00:08:39.217 user 0m0.301s 00:08:39.217 sys 0m0.108s 00:08:39.217 12:12:10 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.218 12:12:10 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:39.218 ************************************ 00:08:39.218 END TEST bdev_json_nonarray 00:08:39.218 ************************************ 00:08:39.479 12:12:10 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:08:39.479 12:12:10 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:08:39.479 12:12:10 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:39.479 12:12:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.479 12:12:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.479 12:12:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:39.479 ************************************ 00:08:39.479 START TEST bdev_gpt_uuid 00:08:39.479 ************************************ 00:08:39.479 12:12:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:39.479 12:12:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:08:39.479 12:12:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:08:39.479 12:12:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62410 00:08:39.479 12:12:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:39.479 12:12:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62410 00:08:39.479 12:12:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:39.479 12:12:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62410 ']' 00:08:39.479 12:12:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.479 12:12:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.479 12:12:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.479 12:12:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.479 12:12:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:39.479 [2024-12-05 12:12:10.190249] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
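bdev_gpt_uuid starts a bare spdk_tgt, loads the bdev config over RPC, waits for bdev examine to finish, then looks each GPT partition up by its UUID alias (6f89f330-... for SPDK_TEST_first, abf1734f-... for SPDK_TEST_second) to confirm the GPT parser exposes stable partition UUIDs. Replayed by hand against the default RPC socket, the traced sequence would look like this (paths shortened; the fields match the JSON dumps below):

    scripts/rpc.py load_config -j test/bdev/bdev.json
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
        | jq -r '.[0].driver_specific.gpt.partition_name'   # -> SPDK_TEST_first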
00:08:39.479 [2024-12-05 12:12:10.190392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62410 ] 00:08:39.741 [2024-12-05 12:12:10.353969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.741 [2024-12-05 12:12:10.457059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.348 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.348 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:40.348 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:40.348 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.348 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:40.608 Some configs were skipped because the RPC state that can call them passed over. 00:08:40.608 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.608 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:08:40.608 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.608 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:40.608 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.608 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:40.608 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.608 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:40.608 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.608 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:08:40.608 { 00:08:40.608 "name": "Nvme1n1p1", 00:08:40.608 "aliases": [ 00:08:40.608 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:40.608 ], 00:08:40.608 "product_name": "GPT Disk", 00:08:40.608 "block_size": 4096, 00:08:40.608 "num_blocks": 655104, 00:08:40.608 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:40.608 "assigned_rate_limits": { 00:08:40.608 "rw_ios_per_sec": 0, 00:08:40.608 "rw_mbytes_per_sec": 0, 00:08:40.608 "r_mbytes_per_sec": 0, 00:08:40.608 "w_mbytes_per_sec": 0 00:08:40.608 }, 00:08:40.608 "claimed": false, 00:08:40.608 "zoned": false, 00:08:40.608 "supported_io_types": { 00:08:40.608 "read": true, 00:08:40.608 "write": true, 00:08:40.608 "unmap": true, 00:08:40.608 "flush": true, 00:08:40.608 "reset": true, 00:08:40.608 "nvme_admin": false, 00:08:40.608 "nvme_io": false, 00:08:40.608 "nvme_io_md": false, 00:08:40.608 "write_zeroes": true, 00:08:40.608 "zcopy": false, 00:08:40.608 "get_zone_info": false, 00:08:40.608 "zone_management": false, 00:08:40.608 "zone_append": false, 00:08:40.608 "compare": true, 00:08:40.608 "compare_and_write": false, 00:08:40.608 "abort": true, 00:08:40.608 "seek_hole": false, 00:08:40.608 "seek_data": false, 00:08:40.608 "copy": true, 00:08:40.608 "nvme_iov_md": false 00:08:40.608 }, 00:08:40.608 "driver_specific": { 
00:08:40.608 "gpt": { 00:08:40.608 "base_bdev": "Nvme1n1", 00:08:40.608 "offset_blocks": 256, 00:08:40.608 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:40.608 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:40.608 "partition_name": "SPDK_TEST_first" 00:08:40.608 } 00:08:40.608 } 00:08:40.608 } 00:08:40.608 ]' 00:08:40.608 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:08:40.869 { 00:08:40.869 "name": "Nvme1n1p2", 00:08:40.869 "aliases": [ 00:08:40.869 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:40.869 ], 00:08:40.869 "product_name": "GPT Disk", 00:08:40.869 "block_size": 4096, 00:08:40.869 "num_blocks": 655103, 00:08:40.869 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:40.869 "assigned_rate_limits": { 00:08:40.869 "rw_ios_per_sec": 0, 00:08:40.869 "rw_mbytes_per_sec": 0, 00:08:40.869 "r_mbytes_per_sec": 0, 00:08:40.869 "w_mbytes_per_sec": 0 00:08:40.869 }, 00:08:40.869 "claimed": false, 00:08:40.869 "zoned": false, 00:08:40.869 "supported_io_types": { 00:08:40.869 "read": true, 00:08:40.869 "write": true, 00:08:40.869 "unmap": true, 00:08:40.869 "flush": true, 00:08:40.869 "reset": true, 00:08:40.869 "nvme_admin": false, 00:08:40.869 "nvme_io": false, 00:08:40.869 "nvme_io_md": false, 00:08:40.869 "write_zeroes": true, 00:08:40.869 "zcopy": false, 00:08:40.869 "get_zone_info": false, 00:08:40.869 "zone_management": false, 00:08:40.869 "zone_append": false, 00:08:40.869 "compare": true, 00:08:40.869 "compare_and_write": false, 00:08:40.869 "abort": true, 00:08:40.869 "seek_hole": false, 00:08:40.869 "seek_data": false, 00:08:40.869 "copy": true, 00:08:40.869 "nvme_iov_md": false 00:08:40.869 }, 00:08:40.869 "driver_specific": { 00:08:40.869 "gpt": { 00:08:40.869 "base_bdev": "Nvme1n1", 00:08:40.869 "offset_blocks": 655360, 00:08:40.869 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:40.869 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:40.869 "partition_name": "SPDK_TEST_second" 00:08:40.869 } 00:08:40.869 } 00:08:40.869 } 00:08:40.869 ]' 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62410 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62410 ']' 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62410 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62410 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62410' 00:08:40.869 killing process with pid 62410 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62410 00:08:40.869 12:12:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62410 00:08:42.781 00:08:42.781 real 0m3.360s 00:08:42.781 user 0m3.496s 00:08:42.781 sys 0m0.441s 00:08:42.781 12:12:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.781 ************************************ 00:08:42.781 END TEST bdev_gpt_uuid 00:08:42.781 ************************************ 00:08:42.781 12:12:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:42.781 12:12:13 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:08:42.781 12:12:13 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:42.781 12:12:13 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:08:42.781 12:12:13 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:42.781 12:12:13 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:42.781 12:12:13 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:42.781 12:12:13 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:42.781 12:12:13 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:42.781 12:12:13 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:43.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:43.378 Waiting for block devices as requested 00:08:43.378 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:43.378 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:43.638 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:43.638 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:48.915 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:48.915 12:12:19 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:48.915 12:12:19 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:48.915 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:48.915 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:48.915 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:48.915 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:48.915 12:12:19 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:48.915 00:08:48.915 real 1m3.529s 00:08:48.915 user 1m20.550s 00:08:48.915 sys 0m9.247s 00:08:48.915 12:12:19 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.915 12:12:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.915 ************************************ 00:08:48.915 END TEST blockdev_nvme_gpt 00:08:48.915 ************************************ 00:08:48.915 12:12:19 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:48.915 12:12:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.915 12:12:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.915 12:12:19 -- common/autotest_common.sh@10 -- # set +x 00:08:48.915 ************************************ 00:08:48.915 START TEST nvme 00:08:48.915 ************************************ 00:08:48.915 12:12:19 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:49.174 * Looking for test storage... 00:08:49.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:49.174 12:12:19 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:49.174 12:12:19 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:49.174 12:12:19 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:49.174 12:12:19 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:49.174 12:12:19 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.174 12:12:19 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.174 12:12:19 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.174 12:12:19 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.174 12:12:19 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.174 12:12:19 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.174 12:12:19 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.174 12:12:19 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.174 12:12:19 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.174 12:12:19 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.174 12:12:19 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.174 12:12:19 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:49.174 12:12:19 nvme -- scripts/common.sh@345 -- # : 1 00:08:49.174 12:12:19 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.174 12:12:19 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.174 12:12:19 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:49.174 12:12:19 nvme -- scripts/common.sh@353 -- # local d=1 00:08:49.174 12:12:19 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.174 12:12:19 nvme -- scripts/common.sh@355 -- # echo 1 00:08:49.174 12:12:19 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.174 12:12:19 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:49.174 12:12:19 nvme -- scripts/common.sh@353 -- # local d=2 00:08:49.174 12:12:19 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.174 12:12:19 nvme -- scripts/common.sh@355 -- # echo 2 00:08:49.174 12:12:19 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.174 12:12:19 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.174 12:12:19 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.174 12:12:19 nvme -- scripts/common.sh@368 -- # return 0 00:08:49.174 12:12:19 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.174 12:12:19 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:49.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.174 --rc genhtml_branch_coverage=1 00:08:49.174 --rc genhtml_function_coverage=1 00:08:49.174 --rc genhtml_legend=1 00:08:49.174 --rc geninfo_all_blocks=1 00:08:49.174 --rc geninfo_unexecuted_blocks=1 00:08:49.174 00:08:49.174 ' 00:08:49.174 12:12:19 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:49.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.174 --rc genhtml_branch_coverage=1 00:08:49.174 --rc genhtml_function_coverage=1 00:08:49.174 --rc genhtml_legend=1 00:08:49.174 --rc geninfo_all_blocks=1 00:08:49.174 --rc geninfo_unexecuted_blocks=1 00:08:49.174 00:08:49.174 ' 00:08:49.174 12:12:19 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:49.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.174 --rc genhtml_branch_coverage=1 00:08:49.174 --rc genhtml_function_coverage=1 00:08:49.174 --rc genhtml_legend=1 00:08:49.174 --rc geninfo_all_blocks=1 00:08:49.174 --rc geninfo_unexecuted_blocks=1 00:08:49.174 00:08:49.174 ' 00:08:49.174 12:12:19 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:49.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.174 --rc genhtml_branch_coverage=1 00:08:49.174 --rc genhtml_function_coverage=1 00:08:49.174 --rc genhtml_legend=1 00:08:49.174 --rc geninfo_all_blocks=1 00:08:49.174 --rc geninfo_unexecuted_blocks=1 00:08:49.174 00:08:49.174 ' 00:08:49.174 12:12:19 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:49.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:49.998 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.998 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.998 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.267 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.267 12:12:20 nvme -- nvme/nvme.sh@79 -- # uname 00:08:50.267 12:12:20 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:50.267 12:12:20 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:50.267 12:12:20 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:50.267 12:12:20 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:50.267 12:12:20 nvme -- 
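The lt/cmp_versions trace above is the harness deciding whether the installed lcov (1.15) predates version 2. Condensed, the helper splits both version strings on '.', '-', and ':' and compares numerically field by field. This is a reduced sketch of the scripts/common.sh logic, not the verbatim helper, and it assumes plain numeric fields:

  #!/usr/bin/env bash
  # Returns 0 when the first dotted version is strictly lower than the second.
  lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<<"$1"
      IFS='.-:' read -ra ver2 <<<"$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first differing field decides
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the trace above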
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:50.267 12:12:20 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:50.267 12:12:20 nvme -- common/autotest_common.sh@1075 -- # stubpid=63052 00:08:50.267 Waiting for stub to ready for secondary processes... 00:08:50.267 12:12:20 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:50.267 12:12:20 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:50.267 12:12:20 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63052 ]] 00:08:50.267 12:12:20 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:50.267 12:12:20 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:50.267 [2024-12-05 12:12:21.006104] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:08:50.267 [2024-12-05 12:12:21.006241] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:51.196 [2024-12-05 12:12:21.966728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:51.196 12:12:21 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:51.196 12:12:21 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63052 ]] 00:08:51.196 12:12:21 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:51.453 [2024-12-05 12:12:22.079952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.453 [2024-12-05 12:12:22.080123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.453 [2024-12-05 12:12:22.080139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.453 [2024-12-05 12:12:22.094681] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:51.453 [2024-12-05 12:12:22.094845] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:51.453 [2024-12-05 12:12:22.104409] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:51.453 [2024-12-05 12:12:22.104638] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:51.453 [2024-12-05 12:12:22.106832] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:51.453 [2024-12-05 12:12:22.107046] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:51.453 [2024-12-05 12:12:22.107160] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:51.453 [2024-12-05 12:12:22.108801] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:51.453 [2024-12-05 12:12:22.108995] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:51.453 [2024-12-05 12:12:22.109102] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:51.453 [2024-12-05 12:12:22.110980] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:51.453 [2024-12-05 12:12:22.111173] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:51.453 [2024-12-05 12:12:22.111231] nvme_cuse.c: 
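The stub handshake being traced here (and continued below) reduces to a simple poll loop: launch the stub app, then wait for /var/run/spdk_stub0 to appear, giving up if the stub dies first. A condensed sketch of the _start_stub/wait logic, using the same flags and paths as the trace:

  #!/usr/bin/env bash
  # Launch the primary-process stub, then poll until it is ready.
  rootdir=/home/vagrant/spdk_repo/spdk
  "$rootdir/test/app/stub/stub" -s 4096 -i 0 -m 0xE &
  stubpid=$!   # 63052 in the log above

  echo "Waiting for stub to be ready for secondary processes..."
  while [ ! -e /var/run/spdk_stub0 ]; do
      [[ -e /proc/$stubpid ]] || { echo "stub exited early" >&2; exit 1; }
      sleep 1s
  done
  echo done.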
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:51.453 [2024-12-05 12:12:22.111263] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:51.453 [2024-12-05 12:12:22.111293] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:52.385 12:12:22 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:52.385 done. 00:08:52.385 12:12:22 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:52.385 12:12:22 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:52.385 12:12:22 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:52.385 12:12:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.385 12:12:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.385 ************************************ 00:08:52.385 START TEST nvme_reset 00:08:52.385 ************************************ 00:08:52.385 12:12:22 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:52.643 Initializing NVMe Controllers 00:08:52.643 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:52.643 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:52.643 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:52.643 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:52.643 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:52.643 ************************************ 00:08:52.643 END TEST nvme_reset 00:08:52.643 ************************************ 00:08:52.643 00:08:52.643 real 0m0.302s 00:08:52.643 user 0m0.141s 00:08:52.643 sys 0m0.113s 00:08:52.643 12:12:23 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.643 12:12:23 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:52.643 12:12:23 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:52.643 12:12:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.643 12:12:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.643 12:12:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.643 ************************************ 00:08:52.643 START TEST nvme_identify 00:08:52.643 ************************************ 00:08:52.643 12:12:23 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:52.643 12:12:23 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:52.643 12:12:23 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:52.643 12:12:23 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:52.643 12:12:23 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:52.643 12:12:23 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:52.643 12:12:23 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:52.643 12:12:23 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:52.643 12:12:23 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:52.643 12:12:23 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:52.643 12:12:23 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:52.643 12:12:23 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:52.643 12:12:23 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:52.903 [2024-12-05 12:12:23.648635] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 63086 terminated unexpected 00:08:52.903 ===================================================== 00:08:52.903 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:52.903 ===================================================== 00:08:52.903 Controller Capabilities/Features 00:08:52.903 ================================ 00:08:52.903 Vendor ID: 1b36 00:08:52.903 Subsystem Vendor ID: 1af4 00:08:52.903 Serial Number: 12340 00:08:52.903 Model Number: QEMU NVMe Ctrl 00:08:52.903 Firmware Version: 8.0.0 00:08:52.903 Recommended Arb Burst: 6 00:08:52.903 IEEE OUI Identifier: 00 54 52 00:08:52.903 Multi-path I/O 00:08:52.903 May have multiple subsystem ports: No 00:08:52.903 May have multiple controllers: No 00:08:52.903 Associated with SR-IOV VF: No 00:08:52.903 Max Data Transfer Size: 524288 00:08:52.903 Max Number of Namespaces: 256 00:08:52.903 Max Number of I/O Queues: 64 00:08:52.903 NVMe Specification Version (VS): 1.4 00:08:52.903 NVMe Specification Version (Identify): 1.4 00:08:52.903 Maximum Queue Entries: 2048 00:08:52.903 Contiguous Queues Required: Yes 00:08:52.903 Arbitration Mechanisms Supported 00:08:52.903 Weighted Round Robin: Not Supported 00:08:52.903 Vendor Specific: Not Supported 00:08:52.903 Reset Timeout: 7500 ms 00:08:52.903 Doorbell Stride: 4 bytes 00:08:52.903 NVM Subsystem Reset: Not Supported 00:08:52.903 Command Sets Supported 00:08:52.903 NVM Command Set: Supported 00:08:52.903 Boot Partition: Not Supported 00:08:52.903 Memory Page Size Minimum: 4096 bytes 00:08:52.903 Memory Page Size Maximum: 65536 bytes 00:08:52.903 Persistent Memory Region: Not Supported 00:08:52.903 Optional Asynchronous Events Supported 00:08:52.903 Namespace Attribute Notices: Supported 00:08:52.903 Firmware Activation Notices: Not Supported 00:08:52.903 ANA Change Notices: Not Supported 00:08:52.903 PLE Aggregate Log Change Notices: Not Supported 00:08:52.903 LBA Status Info Alert Notices: Not Supported 00:08:52.903 EGE Aggregate Log Change Notices: Not Supported 00:08:52.903 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.903 Zone Descriptor Change Notices: Not Supported 00:08:52.903 Discovery Log Change Notices: Not Supported 00:08:52.903 Controller Attributes 00:08:52.903 128-bit Host Identifier: Not Supported 00:08:52.903 Non-Operational Permissive Mode: Not Supported 00:08:52.903 NVM Sets: Not Supported 00:08:52.903 Read Recovery Levels: Not Supported 00:08:52.903 Endurance Groups: Not Supported 00:08:52.903 Predictable Latency Mode: Not Supported 00:08:52.903 Traffic Based Keep Alive: Not Supported 00:08:52.903 Namespace Granularity: Not Supported 00:08:52.903 SQ Associations: Not Supported 00:08:52.903 UUID List: Not Supported 00:08:52.903 Multi-Domain Subsystem: Not Supported 00:08:52.903 Fixed Capacity Management: Not Supported 00:08:52.903 Variable Capacity Management: Not Supported 00:08:52.903 Delete Endurance Group: Not Supported 00:08:52.903 Delete NVM Set: Not Supported 00:08:52.903 Extended LBA Formats Supported: Supported 00:08:52.903 Flexible Data Placement Supported: Not Supported 00:08:52.903 00:08:52.903 Controller Memory Buffer Support 00:08:52.903 ================================ 00:08:52.903 Supported: No
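For reference, the controller list this identify pass iterates over was built by the get_nvme_bdfs trace just above: gen_nvme.sh emits an SPDK JSON config and jq pulls out each controller's PCI address. The same pipeline, restated on its own:

  #!/usr/bin/env bash
  # Collect NVMe PCI addresses the way the test harness does.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "No NVMe devices found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0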
00:08:52.903 00:08:52.903 Persistent Memory Region Support 00:08:52.903 ================================ 00:08:52.903 Supported: No 00:08:52.903 00:08:52.903 Admin Command Set Attributes 00:08:52.903 ============================ 00:08:52.903 Security Send/Receive: Not Supported 00:08:52.903 Format NVM: Supported 00:08:52.903 Firmware Activate/Download: Not Supported 00:08:52.903 Namespace Management: Supported 00:08:52.903 Device Self-Test: Not Supported 00:08:52.903 Directives: Supported 00:08:52.903 NVMe-MI: Not Supported 00:08:52.903 Virtualization Management: Not Supported 00:08:52.903 Doorbell Buffer Config: Supported 00:08:52.903 Get LBA Status Capability: Not Supported 00:08:52.903 Command & Feature Lockdown Capability: Not Supported 00:08:52.903 Abort Command Limit: 4 00:08:52.903 Async Event Request Limit: 4 00:08:52.903 Number of Firmware Slots: N/A 00:08:52.903 Firmware Slot 1 Read-Only: N/A 00:08:52.903 Firmware Activation Without Reset: N/A 00:08:52.903 Multiple Update Detection Support: N/A 00:08:52.903 Firmware Update Granularity: No Information Provided 00:08:52.903 Per-Namespace SMART Log: Yes 00:08:52.903 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.903 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:52.903 Command Effects Log Page: Supported 00:08:52.903 Get Log Page Extended Data: Supported 00:08:52.903 Telemetry Log Pages: Not Supported 00:08:52.903 Persistent Event Log Pages: Not Supported 00:08:52.903 Supported Log Pages Log Page: May Support 00:08:52.903 Commands Supported & Effects Log Page: Not Supported 00:08:52.903 Feature Identifiers & Effects Log Page:May Support 00:08:52.903 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.903 Data Area 4 for Telemetry Log: Not Supported 00:08:52.903 Error Log Page Entries Supported: 1 00:08:52.903 Keep Alive: Not Supported 00:08:52.903 00:08:52.903 NVM Command Set Attributes 00:08:52.903 ========================== 00:08:52.903 Submission Queue Entry Size 00:08:52.903 Max: 64 00:08:52.903 Min: 64 00:08:52.903 Completion Queue Entry Size 00:08:52.903 Max: 16 00:08:52.903 Min: 16 00:08:52.903 Number of Namespaces: 256 00:08:52.903 Compare Command: Supported 00:08:52.903 Write Uncorrectable Command: Not Supported 00:08:52.903 Dataset Management Command: Supported 00:08:52.903 Write Zeroes Command: Supported 00:08:52.903 Set Features Save Field: Supported 00:08:52.904 Reservations: Not Supported 00:08:52.904 Timestamp: Supported 00:08:52.904 Copy: Supported 00:08:52.904 Volatile Write Cache: Present 00:08:52.904 Atomic Write Unit (Normal): 1 00:08:52.904 Atomic Write Unit (PFail): 1 00:08:52.904 Atomic Compare & Write Unit: 1 00:08:52.904 Fused Compare & Write: Not Supported 00:08:52.904 Scatter-Gather List 00:08:52.904 SGL Command Set: Supported 00:08:52.904 SGL Keyed: Not Supported 00:08:52.904 SGL Bit Bucket Descriptor: Not Supported 00:08:52.904 SGL Metadata Pointer: Not Supported 00:08:52.904 Oversized SGL: Not Supported 00:08:52.904 SGL Metadata Address: Not Supported 00:08:52.904 SGL Offset: Not Supported 00:08:52.904 Transport SGL Data Block: Not Supported 00:08:52.904 Replay Protected Memory Block: Not Supported 00:08:52.904 00:08:52.904 Firmware Slot Information 00:08:52.904 ========================= 00:08:52.904 Active slot: 1 00:08:52.904 Slot 1 Firmware Revision: 1.0 00:08:52.904 00:08:52.904 00:08:52.904 Commands Supported and Effects 00:08:52.904 ============================== 00:08:52.904 Admin Commands 00:08:52.904 -------------- 00:08:52.904 Delete I/O Submission Queue (00h): Supported 
00:08:52.904 Create I/O Submission Queue (01h): Supported 00:08:52.904 Get Log Page (02h): Supported 00:08:52.904 Delete I/O Completion Queue (04h): Supported 00:08:52.904 Create I/O Completion Queue (05h): Supported 00:08:52.904 Identify (06h): Supported 00:08:52.904 Abort (08h): Supported 00:08:52.904 Set Features (09h): Supported 00:08:52.904 Get Features (0Ah): Supported 00:08:52.904 Asynchronous Event Request (0Ch): Supported 00:08:52.904 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.904 Directive Send (19h): Supported 00:08:52.904 Directive Receive (1Ah): Supported 00:08:52.904 Virtualization Management (1Ch): Supported 00:08:52.904 Doorbell Buffer Config (7Ch): Supported 00:08:52.904 Format NVM (80h): Supported LBA-Change 00:08:52.904 I/O Commands 00:08:52.904 ------------ 00:08:52.904 Flush (00h): Supported LBA-Change 00:08:52.904 Write (01h): Supported LBA-Change 00:08:52.904 Read (02h): Supported 00:08:52.904 Compare (05h): Supported 00:08:52.904 Write Zeroes (08h): Supported LBA-Change 00:08:52.904 Dataset Management (09h): Supported LBA-Change 00:08:52.904 Unknown (0Ch): Supported 00:08:52.904 Unknown (12h): Supported 00:08:52.904 Copy (19h): Supported LBA-Change 00:08:52.904 Unknown (1Dh): Supported LBA-Change 00:08:52.904 00:08:52.904 Error Log 00:08:52.904 ========= 00:08:52.904 00:08:52.904 Arbitration 00:08:52.904 =========== 00:08:52.904 Arbitration Burst: no limit 00:08:52.904 00:08:52.904 Power Management 00:08:52.904 ================ 00:08:52.904 Number of Power States: 1 00:08:52.904 Current Power State: Power State #0 00:08:52.904 Power State #0: 00:08:52.904 Max Power: 25.00 W 00:08:52.904 Non-Operational State: Operational 00:08:52.904 Entry Latency: 16 microseconds 00:08:52.904 Exit Latency: 4 microseconds 00:08:52.904 Relative Read Throughput: 0 00:08:52.904 Relative Read Latency: 0 00:08:52.904 Relative Write Throughput: 0 00:08:52.904 Relative Write Latency: 0 00:08:52.904 Idle Power[2024-12-05 12:12:23.650157] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 63086 terminated unexpected 00:08:52.904 : Not Reported 00:08:52.904 Active Power: Not Reported 00:08:52.904 Non-Operational Permissive Mode: Not Supported 00:08:52.904 00:08:52.904 Health Information 00:08:52.904 ================== 00:08:52.904 Critical Warnings: 00:08:52.904 Available Spare Space: OK 00:08:52.904 Temperature: OK 00:08:52.904 Device Reliability: OK 00:08:52.904 Read Only: No 00:08:52.904 Volatile Memory Backup: OK 00:08:52.904 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.904 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.904 Available Spare: 0% 00:08:52.904 Available Spare Threshold: 0% 00:08:52.904 Life Percentage Used: 0% 00:08:52.904 Data Units Read: 600 00:08:52.904 Data Units Written: 528 00:08:52.904 Host Read Commands: 34900 00:08:52.904 Host Write Commands: 34686 00:08:52.904 Controller Busy Time: 0 minutes 00:08:52.904 Power Cycles: 0 00:08:52.904 Power On Hours: 0 hours 00:08:52.904 Unsafe Shutdowns: 0 00:08:52.904 Unrecoverable Media Errors: 0 00:08:52.904 Lifetime Error Log Entries: 0 00:08:52.904 Warning Temperature Time: 0 minutes 00:08:52.904 Critical Temperature Time: 0 minutes 00:08:52.904 00:08:52.904 Number of Queues 00:08:52.904 ================ 00:08:52.904 Number of I/O Submission Queues: 64 00:08:52.904 Number of I/O Completion Queues: 64 00:08:52.904 00:08:52.904 ZNS Specific Controller Data 00:08:52.904 ============================ 00:08:52.904 Zone Append Size Limit: 0 00:08:52.904 
00:08:52.904 00:08:52.904 Active Namespaces 00:08:52.904 ================= 00:08:52.904 Namespace ID:1 00:08:52.904 Error Recovery Timeout: Unlimited 00:08:52.904 Command Set Identifier: NVM (00h) 00:08:52.904 Deallocate: Supported 00:08:52.904 Deallocated/Unwritten Error: Supported 00:08:52.904 Deallocated Read Value: All 0x00 00:08:52.904 Deallocate in Write Zeroes: Not Supported 00:08:52.904 Deallocated Guard Field: 0xFFFF 00:08:52.904 Flush: Supported 00:08:52.904 Reservation: Not Supported 00:08:52.904 Metadata Transferred as: Separate Metadata Buffer 00:08:52.904 Namespace Sharing Capabilities: Private 00:08:52.904 Size (in LBAs): 1548666 (5GiB) 00:08:52.904 Capacity (in LBAs): 1548666 (5GiB) 00:08:52.904 Utilization (in LBAs): 1548666 (5GiB) 00:08:52.904 Thin Provisioning: Not Supported 00:08:52.904 Per-NS Atomic Units: No 00:08:52.904 Maximum Single Source Range Length: 128 00:08:52.904 Maximum Copy Length: 128 00:08:52.904 Maximum Source Range Count: 128 00:08:52.904 NGUID/EUI64 Never Reused: No 00:08:52.904 Namespace Write Protected: No 00:08:52.904 Number of LBA Formats: 8 00:08:52.904 Current LBA Format: LBA Format #07 00:08:52.904 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.904 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.904 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.904 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.904 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.904 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.904 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.904 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.904 00:08:52.904 NVM Specific Namespace Data 00:08:52.904 =========================== 00:08:52.904 Logical Block Storage Tag Mask: 0 00:08:52.904 Protection Information Capabilities: 00:08:52.904 16b Guard Protection Information Storage Tag Support: No 00:08:52.904 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.904 Storage Tag Check Read Support: No 00:08:52.904 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.904 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.904 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.904 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.904 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.904 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.904 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.904 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.904 ===================================================== 00:08:52.904 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:52.904 ===================================================== 00:08:52.904 Controller Capabilities/Features 00:08:52.904 ================================ 00:08:52.904 Vendor ID: 1b36 00:08:52.904 Subsystem Vendor ID: 1af4 00:08:52.904 Serial Number: 12341 00:08:52.904 Model Number: QEMU NVMe Ctrl 00:08:52.904 Firmware Version: 8.0.0 00:08:52.904 Recommended Arb Burst: 6 00:08:52.904 IEEE OUI Identifier: 00 54 52 00:08:52.904 Multi-path I/O 00:08:52.904 May have multiple subsystem ports: No 00:08:52.904 May have multiple controllers: No 
00:08:52.904 Associated with SR-IOV VF: No 00:08:52.904 Max Data Transfer Size: 524288 00:08:52.904 Max Number of Namespaces: 256 00:08:52.904 Max Number of I/O Queues: 64 00:08:52.904 NVMe Specification Version (VS): 1.4 00:08:52.904 NVMe Specification Version (Identify): 1.4 00:08:52.904 Maximum Queue Entries: 2048 00:08:52.904 Contiguous Queues Required: Yes 00:08:52.904 Arbitration Mechanisms Supported 00:08:52.904 Weighted Round Robin: Not Supported 00:08:52.904 Vendor Specific: Not Supported 00:08:52.904 Reset Timeout: 7500 ms 00:08:52.904 Doorbell Stride: 4 bytes 00:08:52.904 NVM Subsystem Reset: Not Supported 00:08:52.904 Command Sets Supported 00:08:52.904 NVM Command Set: Supported 00:08:52.904 Boot Partition: Not Supported 00:08:52.904 Memory Page Size Minimum: 4096 bytes 00:08:52.904 Memory Page Size Maximum: 65536 bytes 00:08:52.904 Persistent Memory Region: Not Supported 00:08:52.904 Optional Asynchronous Events Supported 00:08:52.904 Namespace Attribute Notices: Supported 00:08:52.904 Firmware Activation Notices: Not Supported 00:08:52.904 ANA Change Notices: Not Supported 00:08:52.904 PLE Aggregate Log Change Notices: Not Supported 00:08:52.904 LBA Status Info Alert Notices: Not Supported 00:08:52.904 EGE Aggregate Log Change Notices: Not Supported 00:08:52.904 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.904 Zone Descriptor Change Notices: Not Supported 00:08:52.904 Discovery Log Change Notices: Not Supported 00:08:52.904 Controller Attributes 00:08:52.904 128-bit Host Identifier: Not Supported 00:08:52.904 Non-Operational Permissive Mode: Not Supported 00:08:52.904 NVM Sets: Not Supported 00:08:52.904 Read Recovery Levels: Not Supported 00:08:52.904 Endurance Groups: Not Supported 00:08:52.904 Predictable Latency Mode: Not Supported 00:08:52.904 Traffic Based Keep Alive: Not Supported 00:08:52.904 Namespace Granularity: Not Supported 00:08:52.904 SQ Associations: Not Supported 00:08:52.904 UUID List: Not Supported 00:08:52.904 Multi-Domain Subsystem: Not Supported 00:08:52.904 Fixed Capacity Management: Not Supported 00:08:52.904 Variable Capacity Management: Not Supported 00:08:52.904 Delete Endurance Group: Not Supported 00:08:52.904 Delete NVM Set: Not Supported 00:08:52.904 Extended LBA Formats Supported: Supported 00:08:52.904 Flexible Data Placement Supported: Not Supported 00:08:52.904 00:08:52.904 Controller Memory Buffer Support 00:08:52.904 ================================ 00:08:52.904 Supported: No 00:08:52.904 00:08:52.904 Persistent Memory Region Support 00:08:52.904 ================================ 00:08:52.904 Supported: No 00:08:52.904 00:08:52.904 Admin Command Set Attributes 00:08:52.904 ============================ 00:08:52.904 Security Send/Receive: Not Supported 00:08:52.904 Format NVM: Supported 00:08:52.904 Firmware Activate/Download: Not Supported 00:08:52.904 Namespace Management: Supported 00:08:52.904 Device Self-Test: Not Supported 00:08:52.904 Directives: Supported 00:08:52.904 NVMe-MI: Not Supported 00:08:52.904 Virtualization Management: Not Supported 00:08:52.904 Doorbell Buffer Config: Supported 00:08:52.904 Get LBA Status Capability: Not Supported 00:08:52.904 Command & Feature Lockdown Capability: Not Supported 00:08:52.904 Abort Command Limit: 4 00:08:52.904 Async Event Request Limit: 4 00:08:52.904 Number of Firmware Slots: N/A 00:08:52.904 Firmware Slot 1 Read-Only: N/A 00:08:52.904 Firmware Activation Without Reset: N/A 00:08:52.904 Multiple Update Detection Support: N/A 00:08:52.904 Firmware Update Granularity: No
Information Provided 00:08:52.904 Per-Namespace SMART Log: Yes 00:08:52.904 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.904 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:52.904 Command Effects Log Page: Supported 00:08:52.904 Get Log Page Extended Data: Supported 00:08:52.904 Telemetry Log Pages: Not Supported 00:08:52.904 Persistent Event Log Pages: Not Supported 00:08:52.904 Supported Log Pages Log Page: May Support 00:08:52.904 Commands Supported & Effects Log Page: Not Supported 00:08:52.904 Feature Identifiers & Effects Log Page:May Support 00:08:52.904 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.904 Data Area 4 for Telemetry Log: Not Supported 00:08:52.904 Error Log Page Entries Supported: 1 00:08:52.904 Keep Alive: Not Supported 00:08:52.904 00:08:52.904 NVM Command Set Attributes 00:08:52.904 ========================== 00:08:52.904 Submission Queue Entry Size 00:08:52.904 Max: 64 00:08:52.904 Min: 64 00:08:52.904 Completion Queue Entry Size 00:08:52.904 Max: 16 00:08:52.904 Min: 16 00:08:52.904 Number of Namespaces: 256 00:08:52.904 Compare Command: Supported 00:08:52.904 Write Uncorrectable Command: Not Supported 00:08:52.904 Dataset Management Command: Supported 00:08:52.904 Write Zeroes Command: Supported 00:08:52.904 Set Features Save Field: Supported 00:08:52.904 Reservations: Not Supported 00:08:52.904 Timestamp: Supported 00:08:52.904 Copy: Supported 00:08:52.904 Volatile Write Cache: Present 00:08:52.904 Atomic Write Unit (Normal): 1 00:08:52.904 Atomic Write Unit (PFail): 1 00:08:52.904 Atomic Compare & Write Unit: 1 00:08:52.904 Fused Compare & Write: Not Supported 00:08:52.905 Scatter-Gather List 00:08:52.905 SGL Command Set: Supported 00:08:52.905 SGL Keyed: Not Supported 00:08:52.905 SGL Bit Bucket Descriptor: Not Supported 00:08:52.905 SGL Metadata Pointer: Not Supported 00:08:52.905 Oversized SGL: Not Supported 00:08:52.905 SGL Metadata Address: Not Supported 00:08:52.905 SGL Offset: Not Supported 00:08:52.905 Transport SGL Data Block: Not Supported 00:08:52.905 Replay Protected Memory Block: Not Supported 00:08:52.905 00:08:52.905 Firmware Slot Information 00:08:52.905 ========================= 00:08:52.905 Active slot: 1 00:08:52.905 Slot 1 Firmware Revision: 1.0 00:08:52.905 00:08:52.905 00:08:52.905 Commands Supported and Effects 00:08:52.905 ============================== 00:08:52.905 Admin Commands 00:08:52.905 -------------- 00:08:52.905 Delete I/O Submission Queue (00h): Supported 00:08:52.905 Create I/O Submission Queue (01h): Supported 00:08:52.905 Get Log Page (02h): Supported 00:08:52.905 Delete I/O Completion Queue (04h): Supported 00:08:52.905 Create I/O Completion Queue (05h): Supported 00:08:52.905 Identify (06h): Supported 00:08:52.905 Abort (08h): Supported 00:08:52.905 Set Features (09h): Supported 00:08:52.905 Get Features (0Ah): Supported 00:08:52.905 Asynchronous Event Request (0Ch): Supported 00:08:52.905 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.905 Directive Send (19h): Supported 00:08:52.905 Directive Receive (1Ah): Supported 00:08:52.905 Virtualization Management (1Ch): Supported 00:08:52.905 Doorbell Buffer Config (7Ch): Supported 00:08:52.905 Format NVM (80h): Supported LBA-Change 00:08:52.905 I/O Commands 00:08:52.905 ------------ 00:08:52.905 Flush (00h): Supported LBA-Change 00:08:52.905 Write (01h): Supported LBA-Change 00:08:52.905 Read (02h): Supported 00:08:52.905 Compare (05h): Supported 00:08:52.905 Write Zeroes (08h): Supported LBA-Change 00:08:52.905 Dataset Management 
(09h): Supported LBA-Change 00:08:52.905 Unknown (0Ch): Supported 00:08:52.905 Unknown (12h): Supported 00:08:52.905 Copy (19h): Supported LBA-Change 00:08:52.905 Unknown (1Dh): Supported LBA-Change 00:08:52.905 00:08:52.905 Error Log 00:08:52.905 ========= 00:08:52.905 00:08:52.905 Arbitration 00:08:52.905 =========== 00:08:52.905 Arbitration Burst: no limit 00:08:52.905 00:08:52.905 Power Management 00:08:52.905 ================ 00:08:52.905 Number of Power States: 1 00:08:52.905 Current Power State: Power State #0 00:08:52.905 Power State #0: 00:08:52.905 Max Power: 25.00 W 00:08:52.905 Non-Operational State: Operational 00:08:52.905 Entry Latency: 16 microseconds 00:08:52.905 Exit Latency: 4 microseconds 00:08:52.905 Relative Read Throughput: 0 00:08:52.905 Relative Read Latency: 0 00:08:52.905 Relative Write Throughput: 0 00:08:52.905 Relative Write Latency: 0 00:08:52.905 Idle Power: Not Reported 00:08:52.905 Active Power: Not Reported 00:08:52.905 Non-Operational Permissive Mode: Not Supported 00:08:52.905 00:08:52.905 Health Information 00:08:52.905 ================== 00:08:52.905 Critical Warnings: 00:08:52.905 Available Spare Space: OK 00:08:52.905 Temperature: [2024-12-05 12:12:23.651123] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 63086 terminated unexpected 00:08:52.905 OK 00:08:52.905 Device Reliability: OK 00:08:52.905 Read Only: No 00:08:52.905 Volatile Memory Backup: OK 00:08:52.905 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.905 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.905 Available Spare: 0% 00:08:52.905 Available Spare Threshold: 0% 00:08:52.905 Life Percentage Used: 0% 00:08:52.905 Data Units Read: 973 00:08:52.905 Data Units Written: 847 00:08:52.905 Host Read Commands: 51773 00:08:52.905 Host Write Commands: 50667 00:08:52.905 Controller Busy Time: 0 minutes 00:08:52.905 Power Cycles: 0 00:08:52.905 Power On Hours: 0 hours 00:08:52.905 Unsafe Shutdowns: 0 00:08:52.905 Unrecoverable Media Errors: 0 00:08:52.905 Lifetime Error Log Entries: 0 00:08:52.905 Warning Temperature Time: 0 minutes 00:08:52.905 Critical Temperature Time: 0 minutes 00:08:52.905 00:08:52.905 Number of Queues 00:08:52.905 ================ 00:08:52.905 Number of I/O Submission Queues: 64 00:08:52.905 Number of I/O Completion Queues: 64 00:08:52.905 00:08:52.905 ZNS Specific Controller Data 00:08:52.905 ============================ 00:08:52.905 Zone Append Size Limit: 0 00:08:52.905 00:08:52.905 00:08:52.905 Active Namespaces 00:08:52.905 ================= 00:08:52.905 Namespace ID:1 00:08:52.905 Error Recovery Timeout: Unlimited 00:08:52.905 Command Set Identifier: NVM (00h) 00:08:52.905 Deallocate: Supported 00:08:52.905 Deallocated/Unwritten Error: Supported 00:08:52.905 Deallocated Read Value: All 0x00 00:08:52.905 Deallocate in Write Zeroes: Not Supported 00:08:52.905 Deallocated Guard Field: 0xFFFF 00:08:52.905 Flush: Supported 00:08:52.905 Reservation: Not Supported 00:08:52.905 Namespace Sharing Capabilities: Private 00:08:52.905 Size (in LBAs): 1310720 (5GiB) 00:08:52.905 Capacity (in LBAs): 1310720 (5GiB) 00:08:52.905 Utilization (in LBAs): 1310720 (5GiB) 00:08:52.905 Thin Provisioning: Not Supported 00:08:52.905 Per-NS Atomic Units: No 00:08:52.905 Maximum Single Source Range Length: 128 00:08:52.905 Maximum Copy Length: 128 00:08:52.905 Maximum Source Range Count: 128 00:08:52.905 NGUID/EUI64 Never Reused: No 00:08:52.905 Namespace Write Protected: No 00:08:52.905 Number of LBA Formats: 8 00:08:52.905 Current LBA Format: 
LBA Format #04 00:08:52.905 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.905 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.905 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.905 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.905 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.905 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.905 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.905 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.905 00:08:52.905 NVM Specific Namespace Data 00:08:52.905 =========================== 00:08:52.905 Logical Block Storage Tag Mask: 0 00:08:52.905 Protection Information Capabilities: 00:08:52.905 16b Guard Protection Information Storage Tag Support: No 00:08:52.905 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.905 Storage Tag Check Read Support: No 00:08:52.905 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.905 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.905 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.905 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.905 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.905 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.905 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.905 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.905 ===================================================== 00:08:52.905 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:52.905 ===================================================== 00:08:52.905 Controller Capabilities/Features 00:08:52.905 ================================ 00:08:52.905 Vendor ID: 1b36 00:08:52.905 Subsystem Vendor ID: 1af4 00:08:52.905 Serial Number: 12343 00:08:52.905 Model Number: QEMU NVMe Ctrl 00:08:52.905 Firmware Version: 8.0.0 00:08:52.905 Recommended Arb Burst: 6 00:08:52.905 IEEE OUI Identifier: 00 54 52 00:08:52.905 Multi-path I/O 00:08:52.905 May have multiple subsystem ports: No 00:08:52.905 May have multiple controllers: Yes 00:08:52.905 Associated with SR-IOV VF: No 00:08:52.905 Max Data Transfer Size: 524288 00:08:52.905 Max Number of Namespaces: 256 00:08:52.905 Max Number of I/O Queues: 64 00:08:52.905 NVMe Specification Version (VS): 1.4 00:08:52.905 NVMe Specification Version (Identify): 1.4 00:08:52.905 Maximum Queue Entries: 2048 00:08:52.905 Contiguous Queues Required: Yes 00:08:52.905 Arbitration Mechanisms Supported 00:08:52.905 Weighted Round Robin: Not Supported 00:08:52.905 Vendor Specific: Not Supported 00:08:52.905 Reset Timeout: 7500 ms 00:08:52.905 Doorbell Stride: 4 bytes 00:08:52.905 NVM Subsystem Reset: Not Supported 00:08:52.905 Command Sets Supported 00:08:52.905 NVM Command Set: Supported 00:08:52.905 Boot Partition: Not Supported 00:08:52.905 Memory Page Size Minimum: 4096 bytes 00:08:52.905 Memory Page Size Maximum: 65536 bytes 00:08:52.905 Persistent Memory Region: Not Supported 00:08:52.905 Optional Asynchronous Events Supported 00:08:52.905 Namespace Attribute Notices: Supported 00:08:52.905 Firmware Activation Notices: Not Supported 00:08:52.905 ANA Change Notices: Not Supported 00:08:52.905 PLE Aggregate Log 
Change Notices: Not Supported 00:08:52.905 LBA Status Info Alert Notices: Not Supported 00:08:52.905 EGE Aggregate Log Change Notices: Not Supported 00:08:52.905 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.905 Zone Descriptor Change Notices: Not Supported 00:08:52.905 Discovery Log Change Notices: Not Supported 00:08:52.905 Controller Attributes 00:08:52.905 128-bit Host Identifier: Not Supported 00:08:52.905 Non-Operational Permissive Mode: Not Supported 00:08:52.905 NVM Sets: Not Supported 00:08:52.905 Read Recovery Levels: Not Supported 00:08:52.905 Endurance Groups: Supported 00:08:52.905 Predictable Latency Mode: Not Supported 00:08:52.905 Traffic Based Keep Alive: Not Supported 00:08:52.905 Namespace Granularity: Not Supported 00:08:52.905 SQ Associations: Not Supported 00:08:52.905 UUID List: Not Supported 00:08:52.905 Multi-Domain Subsystem: Not Supported 00:08:52.905 Fixed Capacity Management: Not Supported 00:08:52.905 Variable Capacity Management: Not Supported 00:08:52.905 Delete Endurance Group: Not Supported 00:08:52.905 Delete NVM Set: Not Supported 00:08:52.905 Extended LBA Formats Supported: Supported 00:08:52.905 Flexible Data Placement Supported: Supported 00:08:52.905 00:08:52.905 Controller Memory Buffer Support 00:08:52.905 ================================ 00:08:52.905 Supported: No 00:08:52.905 00:08:52.905 Persistent Memory Region Support 00:08:52.905 ================================ 00:08:52.905 Supported: No 00:08:52.905 00:08:52.905 Admin Command Set Attributes 00:08:52.905 ============================ 00:08:52.905 Security Send/Receive: Not Supported 00:08:52.905 Format NVM: Supported 00:08:52.905 Firmware Activate/Download: Not Supported 00:08:52.905 Namespace Management: Supported 00:08:52.905 Device Self-Test: Not Supported 00:08:52.905 Directives: Supported 00:08:52.905 NVMe-MI: Not Supported 00:08:52.905 Virtualization Management: Not Supported 00:08:52.905 Doorbell Buffer Config: Supported 00:08:52.905 Get LBA Status Capability: Not Supported 00:08:52.905 Command & Feature Lockdown Capability: Not Supported 00:08:52.905 Abort Command Limit: 4 00:08:52.905 Async Event Request Limit: 4 00:08:52.905 Number of Firmware Slots: N/A 00:08:52.905 Firmware Slot 1 Read-Only: N/A 00:08:52.905 Firmware Activation Without Reset: N/A 00:08:52.905 Multiple Update Detection Support: N/A 00:08:52.905 Firmware Update Granularity: No Information Provided 00:08:52.905 Per-Namespace SMART Log: Yes 00:08:52.905 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.905 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:52.905 Command Effects Log Page: Supported 00:08:52.905 Get Log Page Extended Data: Supported 00:08:52.905 Telemetry Log Pages: Not Supported 00:08:52.905 Persistent Event Log Pages: Not Supported 00:08:52.905 Supported Log Pages Log Page: May Support 00:08:52.905 Commands Supported & Effects Log Page: Not Supported 00:08:52.905 Feature Identifiers & Effects Log Page:May Support 00:08:52.905 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.905 Data Area 4 for Telemetry Log: Not Supported 00:08:52.905 Error Log Page Entries Supported: 1 00:08:52.905 Keep Alive: Not Supported 00:08:52.905 00:08:52.905 NVM Command Set Attributes 00:08:52.905 ========================== 00:08:52.905 Submission Queue Entry Size 00:08:52.905 Max: 64 00:08:52.905 Min: 64 00:08:52.905 Completion Queue Entry Size 00:08:52.905 Max: 16 00:08:52.905 Min: 16 00:08:52.905 Number of Namespaces: 256 00:08:52.905 Compare Command: Supported 00:08:52.905 Write
Uncorrectable Command: Not Supported 00:08:52.905 Dataset Management Command: Supported 00:08:52.905 Write Zeroes Command: Supported 00:08:52.905 Set Features Save Field: Supported 00:08:52.905 Reservations: Not Supported 00:08:52.905 Timestamp: Supported 00:08:52.905 Copy: Supported 00:08:52.905 Volatile Write Cache: Present 00:08:52.905 Atomic Write Unit (Normal): 1 00:08:52.905 Atomic Write Unit (PFail): 1 00:08:52.905 Atomic Compare & Write Unit: 1 00:08:52.905 Fused Compare & Write: Not Supported 00:08:52.905 Scatter-Gather List 00:08:52.905 SGL Command Set: Supported 00:08:52.905 SGL Keyed: Not Supported 00:08:52.905 SGL Bit Bucket Descriptor: Not Supported 00:08:52.905 SGL Metadata Pointer: Not Supported 00:08:52.905 Oversized SGL: Not Supported 00:08:52.905 SGL Metadata Address: Not Supported 00:08:52.905 SGL Offset: Not Supported 00:08:52.905 Transport SGL Data Block: Not Supported 00:08:52.905 Replay Protected Memory Block: Not Supported 00:08:52.906 00:08:52.906 Firmware Slot Information 00:08:52.906 ========================= 00:08:52.906 Active slot: 1 00:08:52.906 Slot 1 Firmware Revision: 1.0 00:08:52.906 00:08:52.906 00:08:52.906 Commands Supported and Effects 00:08:52.906 ============================== 00:08:52.906 Admin Commands 00:08:52.906 -------------- 00:08:52.906 Delete I/O Submission Queue (00h): Supported 00:08:52.906 Create I/O Submission Queue (01h): Supported 00:08:52.906 Get Log Page (02h): Supported 00:08:52.906 Delete I/O Completion Queue (04h): Supported 00:08:52.906 Create I/O Completion Queue (05h): Supported 00:08:52.906 Identify (06h): Supported 00:08:52.906 Abort (08h): Supported 00:08:52.906 Set Features (09h): Supported 00:08:52.906 Get Features (0Ah): Supported 00:08:52.906 Asynchronous Event Request (0Ch): Supported 00:08:52.906 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.906 Directive Send (19h): Supported 00:08:52.906 Directive Receive (1Ah): Supported 00:08:52.906 Virtualization Management (1Ch): Supported 00:08:52.906 Doorbell Buffer Config (7Ch): Supported 00:08:52.906 Format NVM (80h): Supported LBA-Change 00:08:52.906 I/O Commands 00:08:52.906 ------------ 00:08:52.906 Flush (00h): Supported LBA-Change 00:08:52.906 Write (01h): Supported LBA-Change 00:08:52.906 Read (02h): Supported 00:08:52.906 Compare (05h): Supported 00:08:52.906 Write Zeroes (08h): Supported LBA-Change 00:08:52.906 Dataset Management (09h): Supported LBA-Change 00:08:52.906 Unknown (0Ch): Supported 00:08:52.906 Unknown (12h): Supported 00:08:52.906 Copy (19h): Supported LBA-Change 00:08:52.906 Unknown (1Dh): Supported LBA-Change 00:08:52.906 00:08:52.906 Error Log 00:08:52.906 ========= 00:08:52.906 00:08:52.906 Arbitration 00:08:52.906 =========== 00:08:52.906 Arbitration Burst: no limit 00:08:52.906 00:08:52.906 Power Management 00:08:52.906 ================ 00:08:52.906 Number of Power States: 1 00:08:52.906 Current Power State: Power State #0 00:08:52.906 Power State #0: 00:08:52.906 Max Power: 25.00 W 00:08:52.906 Non-Operational State: Operational 00:08:52.906 Entry Latency: 16 microseconds 00:08:52.906 Exit Latency: 4 microseconds 00:08:52.906 Relative Read Throughput: 0 00:08:52.906 Relative Read Latency: 0 00:08:52.906 Relative Write Throughput: 0 00:08:52.906 Relative Write Latency: 0 00:08:52.906 Idle Power: Not Reported 00:08:52.906 Active Power: Not Reported 00:08:52.906 Non-Operational Permissive Mode: Not Supported 00:08:52.906 00:08:52.906 Health Information 00:08:52.906 ================== 00:08:52.906 Critical Warnings: 00:08:52.906 
Available Spare Space: OK 00:08:52.906 Temperature: OK 00:08:52.906 Device Reliability: OK 00:08:52.906 Read Only: No 00:08:52.906 Volatile Memory Backup: OK 00:08:52.906 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.906 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.906 Available Spare: 0% 00:08:52.906 Available Spare Threshold: 0% 00:08:52.906 Life Percentage Used: 0% 00:08:52.906 Data Units Read: 818 00:08:52.906 Data Units Written: 747 00:08:52.906 Host Read Commands: 36995 00:08:52.906 Host Write Commands: 36418 00:08:52.906 Controller Busy Time: 0 minutes 00:08:52.906 Power Cycles: 0 00:08:52.906 Power On Hours: 0 hours 00:08:52.906 Unsafe Shutdowns: 0 00:08:52.906 Unrecoverable Media Errors: 0 00:08:52.906 Lifetime Error Log Entries: 0 00:08:52.906 Warning Temperature Time: 0 minutes 00:08:52.906 Critical Temperature Time: 0 minutes 00:08:52.906 00:08:52.906 Number of Queues 00:08:52.906 ================ 00:08:52.906 Number of I/O Submission Queues: 64 00:08:52.906 Number of I/O Completion Queues: 64 00:08:52.906 00:08:52.906 ZNS Specific Controller Data 00:08:52.906 ============================ 00:08:52.906 Zone Append Size Limit: 0 00:08:52.906 00:08:52.906 00:08:52.906 Active Namespaces 00:08:52.906 ================= 00:08:52.906 Namespace ID:1 00:08:52.906 Error Recovery Timeout: Unlimited 00:08:52.906 Command Set Identifier: NVM (00h) 00:08:52.906 Deallocate: Supported 00:08:52.906 Deallocated/Unwritten Error: Supported 00:08:52.906 Deallocated Read Value: All 0x00 00:08:52.906 Deallocate in Write Zeroes: Not Supported 00:08:52.906 Deallocated Guard Field: 0xFFFF 00:08:52.906 Flush: Supported 00:08:52.906 Reservation: Not Supported 00:08:52.906 Namespace Sharing Capabilities: Multiple Controllers 00:08:52.906 Size (in LBAs): 262144 (1GiB) 00:08:52.906 Capacity (in LBAs): 262144 (1GiB) 00:08:52.906 Utilization (in LBAs): 262144 (1GiB) 00:08:52.906 Thin Provisioning: Not Supported 00:08:52.906 Per-NS Atomic Units: No 00:08:52.906 Maximum Single Source Range Length: 128 00:08:52.906 Maximum Copy Length: 128 00:08:52.906 Maximum Source Range Count: 128 00:08:52.906 NGUID/EUI64 Never Reused: No 00:08:52.906 Namespace Write Protected: No 00:08:52.906 Endurance group ID: 1 00:08:52.906 Number of LBA Formats: 8 00:08:52.906 Current LBA Format: LBA Format #04 00:08:52.906 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.906 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.906 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.906 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.906 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.906 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.906 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.906 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.906 00:08:52.906 Get Feature FDP: 00:08:52.906 ================ 00:08:52.906 Enabled: Yes 00:08:52.906 FDP configuration index: 0 00:08:52.906 00:08:52.906 FDP configurations log page 00:08:52.906 =========================== 00:08:52.906 Number of FDP configurations: 1 00:08:52.906 Version: 0 00:08:52.906 Size: 112 00:08:52.906 FDP Configuration Descriptor: 0 00:08:52.906 Descriptor Size: 96 00:08:52.906 Reclaim Group Identifier format: 2 00:08:52.906 FDP Volatile Write Cache: Not Present 00:08:52.906 FDP Configuration: Valid 00:08:52.906 Vendor Specific Size: 0 00:08:52.906 Number of Reclaim Groups: 2 00:08:52.906 Number of Reclaim Unit Handles: 8 00:08:52.906 Max Placement Identifiers: 128 00:08:52.906 Number of
Namespaces Supported: 256 00:08:52.906 Reclaim Unit Nominal Size: 6000000 bytes 00:08:52.906 Estimated Reclaim Unit Time Limit: Not Reported 00:08:52.906 RUH Desc #000: RUH Type: Initially Isolated 00:08:52.906 RUH Desc #001: RUH Type: Initially Isolated 00:08:52.906 RUH Desc #002: RUH Type: Initially Isolated 00:08:52.906 RUH Desc #003: RUH Type: Initially Isolated 00:08:52.906 RUH Desc #004: RUH Type: Initially Isolated 00:08:52.906 RUH Desc #005: RUH Type: Initially Isolated 00:08:52.906 RUH Desc #006: RUH Type: Initially Isolated 00:08:52.906 RUH Desc #007: RUH Type: Initially Isolated 00:08:52.906 00:08:52.906 FDP reclaim unit handle usage log page 00:08:52.906 ====================================== 00:08:52.906 Number of Reclaim Unit Handles: 8 00:08:52.906 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:52.906 RUH Usage Desc #001: RUH Attributes: Unused 00:08:52.906 RUH Usage Desc #002: RUH Attributes: Unused 00:08:52.906 RUH Usage Desc #003: RUH Attributes: Unused 00:08:52.906 RUH Usage Desc #004: RUH Attributes: Unused 00:08:52.906 RUH Usage Desc #005: RUH Attributes: Unused 00:08:52.906 RUH Usage Desc #006: RUH Attributes: Unused 00:08:52.906 RUH Usage Desc #007: RUH Attributes: Unused 00:08:52.906 00:08:52.906 FDP statistics log page 00:08:52.906 ======================= 00:08:52.906 Host bytes with metadata written: 470212608 00:08:52.906 Media[2024-12-05 12:12:23.652923] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 63086 terminated unexpected 00:08:52.906 bytes with metadata written: 470241280 00:08:52.906 Media bytes erased: 0 00:08:52.906 00:08:52.906 FDP events log page 00:08:52.906 =================== 00:08:52.906 Number of FDP events: 0 00:08:52.906 00:08:52.906 NVM Specific Namespace Data 00:08:52.906 =========================== 00:08:52.906 Logical Block Storage Tag Mask: 0 00:08:52.906 Protection Information Capabilities: 00:08:52.906 16b Guard Protection Information Storage Tag Support: No 00:08:52.906 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.906 Storage Tag Check Read Support: No 00:08:52.906 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.906 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.906 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.906 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.906 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.906 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.906 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.906 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.906 ===================================================== 00:08:52.906 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:52.906 ===================================================== 00:08:52.906 Controller Capabilities/Features 00:08:52.906 ================================ 00:08:52.906 Vendor ID: 1b36 00:08:52.906 Subsystem Vendor ID: 1af4 00:08:52.906 Serial Number: 12342 00:08:52.906 Model Number: QEMU NVMe Ctrl 00:08:52.906 Firmware Version: 8.0.0 00:08:52.906 Recommended Arb Burst: 6 00:08:52.906 IEEE OUI Identifier: 00 54 52 00:08:52.906 Multi-path I/O
00:08:52.906 May have multiple subsystem ports: No 00:08:52.906 May have multiple controllers: No 00:08:52.906 Associated with SR-IOV VF: No 00:08:52.906 Max Data Transfer Size: 524288 00:08:52.906 Max Number of Namespaces: 256 00:08:52.906 Max Number of I/O Queues: 64 00:08:52.906 NVMe Specification Version (VS): 1.4 00:08:52.906 NVMe Specification Version (Identify): 1.4 00:08:52.906 Maximum Queue Entries: 2048 00:08:52.906 Contiguous Queues Required: Yes 00:08:52.906 Arbitration Mechanisms Supported 00:08:52.906 Weighted Round Robin: Not Supported 00:08:52.906 Vendor Specific: Not Supported 00:08:52.906 Reset Timeout: 7500 ms 00:08:52.906 Doorbell Stride: 4 bytes 00:08:52.906 NVM Subsystem Reset: Not Supported 00:08:52.906 Command Sets Supported 00:08:52.906 NVM Command Set: Supported 00:08:52.906 Boot Partition: Not Supported 00:08:52.906 Memory Page Size Minimum: 4096 bytes 00:08:52.906 Memory Page Size Maximum: 65536 bytes 00:08:52.906 Persistent Memory Region: Not Supported 00:08:52.906 Optional Asynchronous Events Supported 00:08:52.906 Namespace Attribute Notices: Supported 00:08:52.906 Firmware Activation Notices: Not Supported 00:08:52.906 ANA Change Notices: Not Supported 00:08:52.906 PLE Aggregate Log Change Notices: Not Supported 00:08:52.906 LBA Status Info Alert Notices: Not Supported 00:08:52.906 EGE Aggregate Log Change Notices: Not Supported 00:08:52.906 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.906 Zone Descriptor Change Notices: Not Supported 00:08:52.906 Discovery Log Change Notices: Not Supported 00:08:52.906 Controller Attributes 00:08:52.906 128-bit Host Identifier: Not Supported 00:08:52.906 Non-Operational Permissive Mode: Not Supported 00:08:52.906 NVM Sets: Not Supported 00:08:52.906 Read Recovery Levels: Not Supported 00:08:52.906 Endurance Groups: Not Supported 00:08:52.906 Predictable Latency Mode: Not Supported 00:08:52.906 Traffic Based Keep Alive: Not Supported 00:08:52.906 Namespace Granularity: Not Supported 00:08:52.906 SQ Associations: Not Supported 00:08:52.906 UUID List: Not Supported 00:08:52.906 Multi-Domain Subsystem: Not Supported 00:08:52.906 Fixed Capacity Management: Not Supported 00:08:52.906 Variable Capacity Management: Not Supported 00:08:52.906 Delete Endurance Group: Not Supported 00:08:52.906 Delete NVM Set: Not Supported 00:08:52.906 Extended LBA Formats Supported: Supported 00:08:52.906 Flexible Data Placement Supported: Not Supported 00:08:52.906 00:08:52.906 Controller Memory Buffer Support 00:08:52.907 ================================ 00:08:52.907 Supported: No 00:08:52.907 00:08:52.907 Persistent Memory Region Support 00:08:52.907 ================================ 00:08:52.907 Supported: No 00:08:52.907 00:08:52.907 Admin Command Set Attributes 00:08:52.907 ============================ 00:08:52.907 Security Send/Receive: Not Supported 00:08:52.907 Format NVM: Supported 00:08:52.907 Firmware Activate/Download: Not Supported 00:08:52.907 Namespace Management: Supported 00:08:52.907 Device Self-Test: Not Supported 00:08:52.907 Directives: Supported 00:08:52.907 NVMe-MI: Not Supported 00:08:52.907 Virtualization Management: Not Supported 00:08:52.907 Doorbell Buffer Config: Supported 00:08:52.907 Get LBA Status Capability: Not Supported 00:08:52.907 Command & Feature Lockdown Capability: Not Supported 00:08:52.907 Abort Command Limit: 4 00:08:52.907 Async Event Request Limit: 4 00:08:52.907 Number of Firmware Slots: N/A 00:08:52.907 Firmware Slot 1 Read-Only: N/A 00:08:52.907 Firmware Activation Without Reset: N/A
00:08:52.907 Multiple Update Detection Support: N/A 00:08:52.907 Firmware Update Granularity: No Information Provided 00:08:52.907 Per-Namespace SMART Log: Yes 00:08:52.907 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.907 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:52.907 Command Effects Log Page: Supported 00:08:52.907 Get Log Page Extended Data: Supported 00:08:52.907 Telemetry Log Pages: Not Supported 00:08:52.907 Persistent Event Log Pages: Not Supported 00:08:52.907 Supported Log Pages Log Page: May Support 00:08:52.907 Commands Supported & Effects Log Page: Not Supported 00:08:52.907 Feature Identifiers & Effects Log Page:May Support 00:08:52.907 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.907 Data Area 4 for Telemetry Log: Not Supported 00:08:52.907 Error Log Page Entries Supported: 1 00:08:52.907 Keep Alive: Not Supported 00:08:52.907 00:08:52.907 NVM Command Set Attributes 00:08:52.907 ========================== 00:08:52.907 Submission Queue Entry Size 00:08:52.907 Max: 64 00:08:52.907 Min: 64 00:08:52.907 Completion Queue Entry Size 00:08:52.907 Max: 16 00:08:52.907 Min: 16 00:08:52.907 Number of Namespaces: 256 00:08:52.907 Compare Command: Supported 00:08:52.907 Write Uncorrectable Command: Not Supported 00:08:52.907 Dataset Management Command: Supported 00:08:52.907 Write Zeroes Command: Supported 00:08:52.907 Set Features Save Field: Supported 00:08:52.907 Reservations: Not Supported 00:08:52.907 Timestamp: Supported 00:08:52.907 Copy: Supported 00:08:52.907 Volatile Write Cache: Present 00:08:52.907 Atomic Write Unit (Normal): 1 00:08:52.907 Atomic Write Unit (PFail): 1 00:08:52.907 Atomic Compare & Write Unit: 1 00:08:52.907 Fused Compare & Write: Not Supported 00:08:52.907 Scatter-Gather List 00:08:52.907 SGL Command Set: Supported 00:08:52.907 SGL Keyed: Not Supported 00:08:52.907 SGL Bit Bucket Descriptor: Not Supported 00:08:52.907 SGL Metadata Pointer: Not Supported 00:08:52.907 Oversized SGL: Not Supported 00:08:52.907 SGL Metadata Address: Not Supported 00:08:52.907 SGL Offset: Not Supported 00:08:52.907 Transport SGL Data Block: Not Supported 00:08:52.907 Replay Protected Memory Block: Not Supported 00:08:52.907 00:08:52.907 Firmware Slot Information 00:08:52.907 ========================= 00:08:52.907 Active slot: 1 00:08:52.907 Slot 1 Firmware Revision: 1.0 00:08:52.907 00:08:52.907 00:08:52.907 Commands Supported and Effects 00:08:52.907 ============================== 00:08:52.907 Admin Commands 00:08:52.907 -------------- 00:08:52.907 Delete I/O Submission Queue (00h): Supported 00:08:52.907 Create I/O Submission Queue (01h): Supported 00:08:52.907 Get Log Page (02h): Supported 00:08:52.907 Delete I/O Completion Queue (04h): Supported 00:08:52.907 Create I/O Completion Queue (05h): Supported 00:08:52.907 Identify (06h): Supported 00:08:52.907 Abort (08h): Supported 00:08:52.907 Set Features (09h): Supported 00:08:52.907 Get Features (0Ah): Supported 00:08:52.907 Asynchronous Event Request (0Ch): Supported 00:08:52.907 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.907 Directive Send (19h): Supported 00:08:52.907 Directive Receive (1Ah): Supported 00:08:52.907 Virtualization Management (1Ch): Supported 00:08:52.907 Doorbell Buffer Config (7Ch): Supported 00:08:52.907 Format NVM (80h): Supported LBA-Change 00:08:52.907 I/O Commands 00:08:52.907 ------------ 00:08:52.907 Flush (00h): Supported LBA-Change 00:08:52.907 Write (01h): Supported LBA-Change 00:08:52.907 Read (02h): Supported 00:08:52.907 Compare (05h): 
Supported 00:08:52.907 Write Zeroes (08h): Supported LBA-Change 00:08:52.907 Dataset Management (09h): Supported LBA-Change 00:08:52.907 Unknown (0Ch): Supported 00:08:52.907 Unknown (12h): Supported 00:08:52.907 Copy (19h): Supported LBA-Change 00:08:52.907 Unknown (1Dh): Supported LBA-Change 00:08:52.907 00:08:52.907 Error Log 00:08:52.907 ========= 00:08:52.907 00:08:52.907 Arbitration 00:08:52.907 =========== 00:08:52.907 Arbitration Burst: no limit 00:08:52.907 00:08:52.907 Power Management 00:08:52.907 ================ 00:08:52.907 Number of Power States: 1 00:08:52.907 Current Power State: Power State #0 00:08:52.907 Power State #0: 00:08:52.907 Max Power: 25.00 W 00:08:52.907 Non-Operational State: Operational 00:08:52.907 Entry Latency: 16 microseconds 00:08:52.907 Exit Latency: 4 microseconds 00:08:52.907 Relative Read Throughput: 0 00:08:52.907 Relative Read Latency: 0 00:08:52.907 Relative Write Throughput: 0 00:08:52.907 Relative Write Latency: 0 00:08:52.907 Idle Power: Not Reported 00:08:52.907 Active Power: Not Reported 00:08:52.907 Non-Operational Permissive Mode: Not Supported 00:08:52.907 00:08:52.907 Health Information 00:08:52.907 ================== 00:08:52.907 Critical Warnings: 00:08:52.907 Available Spare Space: OK 00:08:52.907 Temperature: OK 00:08:52.907 Device Reliability: OK 00:08:52.907 Read Only: No 00:08:52.907 Volatile Memory Backup: OK 00:08:52.907 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.907 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.907 Available Spare: 0% 00:08:52.907 Available Spare Threshold: 0% 00:08:52.907 Life Percentage Used: 0% 00:08:52.907 Data Units Read: 2063 00:08:52.907 Data Units Written: 1850 00:08:52.907 Host Read Commands: 107823 00:08:52.907 Host Write Commands: 106092 00:08:52.907 Controller Busy Time: 0 minutes 00:08:52.907 Power Cycles: 0 00:08:52.907 Power On Hours: 0 hours 00:08:52.907 Unsafe Shutdowns: 0 00:08:52.907 Unrecoverable Media Errors: 0 00:08:52.907 Lifetime Error Log Entries: 0 00:08:52.907 Warning Temperature Time: 0 minutes 00:08:52.907 Critical Temperature Time: 0 minutes 00:08:52.907 00:08:52.907 Number of Queues 00:08:52.907 ================ 00:08:52.907 Number of I/O Submission Queues: 64 00:08:52.907 Number of I/O Completion Queues: 64 00:08:52.907 00:08:52.907 ZNS Specific Controller Data 00:08:52.907 ============================ 00:08:52.907 Zone Append Size Limit: 0 00:08:52.907 00:08:52.907 00:08:52.907 Active Namespaces 00:08:52.907 ================= 00:08:52.907 Namespace ID:1 00:08:52.907 Error Recovery Timeout: Unlimited 00:08:52.907 Command Set Identifier: NVM (00h) 00:08:52.907 Deallocate: Supported 00:08:52.907 Deallocated/Unwritten Error: Supported 00:08:52.907 Deallocated Read Value: All 0x00 00:08:52.907 Deallocate in Write Zeroes: Not Supported 00:08:52.907 Deallocated Guard Field: 0xFFFF 00:08:52.907 Flush: Supported 00:08:52.907 Reservation: Not Supported 00:08:52.907 Namespace Sharing Capabilities: Private 00:08:52.907 Size (in LBAs): 1048576 (4GiB) 00:08:52.907 Capacity (in LBAs): 1048576 (4GiB) 00:08:52.907 Utilization (in LBAs): 1048576 (4GiB) 00:08:52.907 Thin Provisioning: Not Supported 00:08:52.907 Per-NS Atomic Units: No 00:08:52.907 Maximum Single Source Range Length: 128 00:08:52.907 Maximum Copy Length: 128 00:08:52.907 Maximum Source Range Count: 128 00:08:52.907 NGUID/EUI64 Never Reused: No 00:08:52.907 Namespace Write Protected: No 00:08:52.907 Number of LBA Formats: 8 00:08:52.907 Current LBA Format: LBA Format #04 00:08:52.907 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:08:52.907 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.907 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.907 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.907 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.907 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.907 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.907 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.907 00:08:52.907 NVM Specific Namespace Data 00:08:52.907 =========================== 00:08:52.907 Logical Block Storage Tag Mask: 0 00:08:52.907 Protection Information Capabilities: 00:08:52.907 16b Guard Protection Information Storage Tag Support: No 00:08:52.907 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.907 Storage Tag Check Read Support: No 00:08:52.907 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Namespace ID:2 00:08:52.907 Error Recovery Timeout: Unlimited 00:08:52.907 Command Set Identifier: NVM (00h) 00:08:52.907 Deallocate: Supported 00:08:52.907 Deallocated/Unwritten Error: Supported 00:08:52.907 Deallocated Read Value: All 0x00 00:08:52.907 Deallocate in Write Zeroes: Not Supported 00:08:52.907 Deallocated Guard Field: 0xFFFF 00:08:52.907 Flush: Supported 00:08:52.907 Reservation: Not Supported 00:08:52.907 Namespace Sharing Capabilities: Private 00:08:52.907 Size (in LBAs): 1048576 (4GiB) 00:08:52.907 Capacity (in LBAs): 1048576 (4GiB) 00:08:52.907 Utilization (in LBAs): 1048576 (4GiB) 00:08:52.907 Thin Provisioning: Not Supported 00:08:52.907 Per-NS Atomic Units: No 00:08:52.907 Maximum Single Source Range Length: 128 00:08:52.907 Maximum Copy Length: 128 00:08:52.907 Maximum Source Range Count: 128 00:08:52.907 NGUID/EUI64 Never Reused: No 00:08:52.907 Namespace Write Protected: No 00:08:52.907 Number of LBA Formats: 8 00:08:52.907 Current LBA Format: LBA Format #04 00:08:52.907 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.907 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.907 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.907 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.907 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.907 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.907 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.907 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.907 00:08:52.907 NVM Specific Namespace Data 00:08:52.907 =========================== 00:08:52.907 Logical Block Storage Tag Mask: 0 00:08:52.907 Protection Information Capabilities: 00:08:52.907 16b Guard Protection Information Storage Tag Support: No 00:08:52.907 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
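The namespace blocks above pair Size/Capacity/Utilization in LBAs (1048576) with a current LBA format (#04: 4096-byte data, 0-byte metadata), which is how the tool arrives at the parenthesized 4GiB figure. A short C sketch of that conversion; the format table is transcribed from the dump, the type and variable names are ours:

#include <stdint.h>
#include <stdio.h>

/* The eight LBA formats listed above: {data size, metadata size} in bytes. */
struct lba_format { uint32_t data_size; uint32_t md_size; };

static const struct lba_format fmt[8] = {
    {512, 0}, {512, 8}, {512, 16}, {512, 64},
    {4096, 0}, {4096, 8}, {4096, 16}, {4096, 64},
};

int main(void)
{
    uint64_t nsze = 1048576; /* Size (in LBAs) from the dump */
    unsigned cur  = 4;       /* Current LBA Format: LBA Format #04 */

    uint64_t bytes = nsze * fmt[cur].data_size;
    printf("%llu LBAs x %u B = %llu bytes = %llu GiB\n",
           (unsigned long long)nsze, fmt[cur].data_size,
           (unsigned long long)bytes,
           (unsigned long long)(bytes >> 30)); /* 1048576 x 4096 = 4 GiB */
    return 0;
}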
00:08:52.907 Storage Tag Check Read Support: No 00:08:52.907 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.907 Namespace ID:3 00:08:52.907 Error Recovery Timeout: Unlimited 00:08:52.907 Command Set Identifier: NVM (00h) 00:08:52.907 Deallocate: Supported 00:08:52.907 Deallocated/Unwritten Error: Supported 00:08:52.907 Deallocated Read Value: All 0x00 00:08:52.907 Deallocate in Write Zeroes: Not Supported 00:08:52.907 Deallocated Guard Field: 0xFFFF 00:08:52.907 Flush: Supported 00:08:52.907 Reservation: Not Supported 00:08:52.907 Namespace Sharing Capabilities: Private 00:08:52.907 Size (in LBAs): 1048576 (4GiB) 00:08:52.907 Capacity (in LBAs): 1048576 (4GiB) 00:08:52.907 Utilization (in LBAs): 1048576 (4GiB) 00:08:52.907 Thin Provisioning: Not Supported 00:08:52.907 Per-NS Atomic Units: No 00:08:52.907 Maximum Single Source Range Length: 128 00:08:52.907 Maximum Copy Length: 128 00:08:52.907 Maximum Source Range Count: 128 00:08:52.907 NGUID/EUI64 Never Reused: No 00:08:52.907 Namespace Write Protected: No 00:08:52.907 Number of LBA Formats: 8 00:08:52.907 Current LBA Format: LBA Format #04 00:08:52.907 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.907 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.907 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.907 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.907 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.907 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.908 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.908 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.908 00:08:52.908 NVM Specific Namespace Data 00:08:52.908 =========================== 00:08:52.908 Logical Block Storage Tag Mask: 0 00:08:52.908 Protection Information Capabilities: 00:08:52.908 16b Guard Protection Information Storage Tag Support: No 00:08:52.908 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.908 Storage Tag Check Read Support: No 00:08:52.908 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.908 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.908 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.908 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.908 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.908 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.908 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.908 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.908 12:12:23 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:52.908 12:12:23 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:53.167 ===================================================== 00:08:53.168 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:53.168 ===================================================== 00:08:53.168 Controller Capabilities/Features 00:08:53.168 ================================ 00:08:53.168 Vendor ID: 1b36 00:08:53.168 Subsystem Vendor ID: 1af4 00:08:53.168 Serial Number: 12340 00:08:53.168 Model Number: QEMU NVMe Ctrl 00:08:53.168 Firmware Version: 8.0.0 00:08:53.168 Recommended Arb Burst: 6 00:08:53.168 IEEE OUI Identifier: 00 54 52 00:08:53.168 Multi-path I/O 00:08:53.168 May have multiple subsystem ports: No 00:08:53.168 May have multiple controllers: No 00:08:53.168 Associated with SR-IOV VF: No 00:08:53.168 Max Data Transfer Size: 524288 00:08:53.168 Max Number of Namespaces: 256 00:08:53.168 Max Number of I/O Queues: 64 00:08:53.168 NVMe Specification Version (VS): 1.4 00:08:53.168 NVMe Specification Version (Identify): 1.4 00:08:53.168 Maximum Queue Entries: 2048 00:08:53.168 Contiguous Queues Required: Yes 00:08:53.168 Arbitration Mechanisms Supported 00:08:53.168 Weighted Round Robin: Not Supported 00:08:53.168 Vendor Specific: Not Supported 00:08:53.168 Reset Timeout: 7500 ms 00:08:53.168 Doorbell Stride: 4 bytes 00:08:53.168 NVM Subsystem Reset: Not Supported 00:08:53.168 Command Sets Supported 00:08:53.168 NVM Command Set: Supported 00:08:53.168 Boot Partition: Not Supported 00:08:53.168 Memory Page Size Minimum: 4096 bytes 00:08:53.168 Memory Page Size Maximum: 65536 bytes 00:08:53.168 Persistent Memory Region: Not Supported 00:08:53.168 Optional Asynchronous Events Supported 00:08:53.168 Namespace Attribute Notices: Supported 00:08:53.168 Firmware Activation Notices: Not Supported 00:08:53.168 ANA Change Notices: Not Supported 00:08:53.168 PLE Aggregate Log Change Notices: Not Supported 00:08:53.168 LBA Status Info Alert Notices: Not Supported 00:08:53.168 EGE Aggregate Log Change Notices: Not Supported 00:08:53.168 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.168 Zone Descriptor Change Notices: Not Supported 00:08:53.168 Discovery Log Change Notices: Not Supported 00:08:53.168 Controller Attributes 00:08:53.168 128-bit Host Identifier: Not Supported 00:08:53.168 Non-Operational Permissive Mode: Not Supported 00:08:53.168 NVM Sets: Not Supported 00:08:53.168 Read Recovery Levels: Not Supported 00:08:53.168 Endurance Groups: Not Supported 00:08:53.168 Predictable Latency Mode: Not Supported 00:08:53.168 Traffic Based Keep ALive: Not Supported 00:08:53.168 Namespace Granularity: Not Supported 00:08:53.168 SQ Associations: Not Supported 00:08:53.168 UUID List: Not Supported 00:08:53.168 Multi-Domain Subsystem: Not Supported 00:08:53.168 Fixed Capacity Management: Not Supported 00:08:53.168 Variable Capacity Management: Not Supported 00:08:53.168 Delete Endurance Group: Not Supported 00:08:53.168 Delete NVM Set: Not Supported 00:08:53.168 Extended LBA Formats Supported: Supported 00:08:53.168 Flexible Data Placement Supported: Not Supported 00:08:53.168 00:08:53.168 Controller Memory Buffer Support 00:08:53.168 ================================ 00:08:53.168 Supported: No 00:08:53.168 00:08:53.168 Persistent Memory Region Support 00:08:53.168 
================================ 00:08:53.168 Supported: No 00:08:53.168 00:08:53.168 Admin Command Set Attributes 00:08:53.168 ============================ 00:08:53.168 Security Send/Receive: Not Supported 00:08:53.168 Format NVM: Supported 00:08:53.168 Firmware Activate/Download: Not Supported 00:08:53.168 Namespace Management: Supported 00:08:53.168 Device Self-Test: Not Supported 00:08:53.168 Directives: Supported 00:08:53.168 NVMe-MI: Not Supported 00:08:53.168 Virtualization Management: Not Supported 00:08:53.168 Doorbell Buffer Config: Supported 00:08:53.168 Get LBA Status Capability: Not Supported 00:08:53.168 Command & Feature Lockdown Capability: Not Supported 00:08:53.168 Abort Command Limit: 4 00:08:53.168 Async Event Request Limit: 4 00:08:53.168 Number of Firmware Slots: N/A 00:08:53.168 Firmware Slot 1 Read-Only: N/A 00:08:53.168 Firmware Activation Without Reset: N/A 00:08:53.168 Multiple Update Detection Support: N/A 00:08:53.168 Firmware Update Granularity: No Information Provided 00:08:53.168 Per-Namespace SMART Log: Yes 00:08:53.168 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.168 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:53.168 Command Effects Log Page: Supported 00:08:53.168 Get Log Page Extended Data: Supported 00:08:53.168 Telemetry Log Pages: Not Supported 00:08:53.168 Persistent Event Log Pages: Not Supported 00:08:53.168 Supported Log Pages Log Page: May Support 00:08:53.168 Commands Supported & Effects Log Page: Not Supported 00:08:53.168 Feature Identifiers & Effects Log Page:May Support 00:08:53.168 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.168 Data Area 4 for Telemetry Log: Not Supported 00:08:53.168 Error Log Page Entries Supported: 1 00:08:53.168 Keep Alive: Not Supported 00:08:53.168 00:08:53.168 NVM Command Set Attributes 00:08:53.168 ========================== 00:08:53.168 Submission Queue Entry Size 00:08:53.168 Max: 64 00:08:53.168 Min: 64 00:08:53.168 Completion Queue Entry Size 00:08:53.168 Max: 16 00:08:53.168 Min: 16 00:08:53.168 Number of Namespaces: 256 00:08:53.168 Compare Command: Supported 00:08:53.168 Write Uncorrectable Command: Not Supported 00:08:53.168 Dataset Management Command: Supported 00:08:53.168 Write Zeroes Command: Supported 00:08:53.168 Set Features Save Field: Supported 00:08:53.168 Reservations: Not Supported 00:08:53.168 Timestamp: Supported 00:08:53.168 Copy: Supported 00:08:53.168 Volatile Write Cache: Present 00:08:53.168 Atomic Write Unit (Normal): 1 00:08:53.168 Atomic Write Unit (PFail): 1 00:08:53.168 Atomic Compare & Write Unit: 1 00:08:53.168 Fused Compare & Write: Not Supported 00:08:53.168 Scatter-Gather List 00:08:53.168 SGL Command Set: Supported 00:08:53.168 SGL Keyed: Not Supported 00:08:53.168 SGL Bit Bucket Descriptor: Not Supported 00:08:53.168 SGL Metadata Pointer: Not Supported 00:08:53.168 Oversized SGL: Not Supported 00:08:53.168 SGL Metadata Address: Not Supported 00:08:53.168 SGL Offset: Not Supported 00:08:53.168 Transport SGL Data Block: Not Supported 00:08:53.168 Replay Protected Memory Block: Not Supported 00:08:53.168 00:08:53.168 Firmware Slot Information 00:08:53.168 ========================= 00:08:53.168 Active slot: 1 00:08:53.168 Slot 1 Firmware Revision: 1.0 00:08:53.168 00:08:53.168 00:08:53.168 Commands Supported and Effects 00:08:53.168 ============================== 00:08:53.168 Admin Commands 00:08:53.168 -------------- 00:08:53.168 Delete I/O Submission Queue (00h): Supported 00:08:53.168 Create I/O Submission Queue (01h): Supported 00:08:53.168 
Get Log Page (02h): Supported 00:08:53.168 Delete I/O Completion Queue (04h): Supported 00:08:53.168 Create I/O Completion Queue (05h): Supported 00:08:53.168 Identify (06h): Supported 00:08:53.168 Abort (08h): Supported 00:08:53.168 Set Features (09h): Supported 00:08:53.168 Get Features (0Ah): Supported 00:08:53.168 Asynchronous Event Request (0Ch): Supported 00:08:53.168 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.168 Directive Send (19h): Supported 00:08:53.168 Directive Receive (1Ah): Supported 00:08:53.168 Virtualization Management (1Ch): Supported 00:08:53.168 Doorbell Buffer Config (7Ch): Supported 00:08:53.168 Format NVM (80h): Supported LBA-Change 00:08:53.168 I/O Commands 00:08:53.168 ------------ 00:08:53.168 Flush (00h): Supported LBA-Change 00:08:53.168 Write (01h): Supported LBA-Change 00:08:53.168 Read (02h): Supported 00:08:53.168 Compare (05h): Supported 00:08:53.168 Write Zeroes (08h): Supported LBA-Change 00:08:53.168 Dataset Management (09h): Supported LBA-Change 00:08:53.168 Unknown (0Ch): Supported 00:08:53.168 Unknown (12h): Supported 00:08:53.168 Copy (19h): Supported LBA-Change 00:08:53.168 Unknown (1Dh): Supported LBA-Change 00:08:53.168 00:08:53.168 Error Log 00:08:53.168 ========= 00:08:53.168 00:08:53.168 Arbitration 00:08:53.168 =========== 00:08:53.168 Arbitration Burst: no limit 00:08:53.168 00:08:53.168 Power Management 00:08:53.168 ================ 00:08:53.168 Number of Power States: 1 00:08:53.168 Current Power State: Power State #0 00:08:53.168 Power State #0: 00:08:53.168 Max Power: 25.00 W 00:08:53.168 Non-Operational State: Operational 00:08:53.168 Entry Latency: 16 microseconds 00:08:53.168 Exit Latency: 4 microseconds 00:08:53.168 Relative Read Throughput: 0 00:08:53.168 Relative Read Latency: 0 00:08:53.168 Relative Write Throughput: 0 00:08:53.168 Relative Write Latency: 0 00:08:53.168 Idle Power: Not Reported 00:08:53.168 Active Power: Not Reported 00:08:53.168 Non-Operational Permissive Mode: Not Supported 00:08:53.168 00:08:53.168 Health Information 00:08:53.168 ================== 00:08:53.168 Critical Warnings: 00:08:53.168 Available Spare Space: OK 00:08:53.168 Temperature: OK 00:08:53.168 Device Reliability: OK 00:08:53.168 Read Only: No 00:08:53.168 Volatile Memory Backup: OK 00:08:53.168 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.168 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.168 Available Spare: 0% 00:08:53.168 Available Spare Threshold: 0% 00:08:53.168 Life Percentage Used: 0% 00:08:53.168 Data Units Read: 600 00:08:53.168 Data Units Written: 528 00:08:53.168 Host Read Commands: 34900 00:08:53.168 Host Write Commands: 34686 00:08:53.168 Controller Busy Time: 0 minutes 00:08:53.168 Power Cycles: 0 00:08:53.168 Power On Hours: 0 hours 00:08:53.168 Unsafe Shutdowns: 0 00:08:53.168 Unrecoverable Media Errors: 0 00:08:53.168 Lifetime Error Log Entries: 0 00:08:53.168 Warning Temperature Time: 0 minutes 00:08:53.168 Critical Temperature Time: 0 minutes 00:08:53.168 00:08:53.168 Number of Queues 00:08:53.168 ================ 00:08:53.168 Number of I/O Submission Queues: 64 00:08:53.168 Number of I/O Completion Queues: 64 00:08:53.168 00:08:53.168 ZNS Specific Controller Data 00:08:53.168 ============================ 00:08:53.168 Zone Append Size Limit: 0 00:08:53.168 00:08:53.168 00:08:53.168 Active Namespaces 00:08:53.168 ================= 00:08:53.168 Namespace ID:1 00:08:53.168 Error Recovery Timeout: Unlimited 00:08:53.168 Command Set Identifier: NVM (00h) 00:08:53.168 Deallocate: Supported 
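The health block above for controller 12340 reports Data Units Read: 600 and Data Units Written: 528. Assuming the standard NVMe SMART definition of a data unit (1000 512-byte sectors), those small-looking counters translate to real volume; a minimal C sketch of the conversion, with the helper name being ours:

#include <stdint.h>
#include <stdio.h>

/* Per the NVMe SMART log definition, one data unit = 1000 * 512 bytes. */
static uint64_t data_units_to_bytes(uint64_t units)
{
    return units * 1000u * 512u;
}

int main(void)
{
    /* Counters from controller 12340's health log above. */
    printf("read:    %llu bytes\n",
           (unsigned long long)data_units_to_bytes(600)); /* 307200000 */
    printf("written: %llu bytes\n",
           (unsigned long long)data_units_to_bytes(528)); /* 270336000 */
    return 0;
}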
00:08:53.168 Deallocated/Unwritten Error: Supported 00:08:53.168 Deallocated Read Value: All 0x00 00:08:53.168 Deallocate in Write Zeroes: Not Supported 00:08:53.168 Deallocated Guard Field: 0xFFFF 00:08:53.168 Flush: Supported 00:08:53.168 Reservation: Not Supported 00:08:53.168 Metadata Transferred as: Separate Metadata Buffer 00:08:53.168 Namespace Sharing Capabilities: Private 00:08:53.168 Size (in LBAs): 1548666 (5GiB) 00:08:53.168 Capacity (in LBAs): 1548666 (5GiB) 00:08:53.168 Utilization (in LBAs): 1548666 (5GiB) 00:08:53.168 Thin Provisioning: Not Supported 00:08:53.168 Per-NS Atomic Units: No 00:08:53.168 Maximum Single Source Range Length: 128 00:08:53.168 Maximum Copy Length: 128 00:08:53.168 Maximum Source Range Count: 128 00:08:53.168 NGUID/EUI64 Never Reused: No 00:08:53.168 Namespace Write Protected: No 00:08:53.168 Number of LBA Formats: 8 00:08:53.168 Current LBA Format: LBA Format #07 00:08:53.168 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.168 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.168 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.168 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.168 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.168 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.168 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.168 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.168 00:08:53.168 NVM Specific Namespace Data 00:08:53.168 =========================== 00:08:53.168 Logical Block Storage Tag Mask: 0 00:08:53.168 Protection Information Capabilities: 00:08:53.168 16b Guard Protection Information Storage Tag Support: No 00:08:53.168 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.168 Storage Tag Check Read Support: No 00:08:53.168 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.168 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.168 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.168 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.168 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.168 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.168 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.169 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.169 12:12:23 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:53.169 12:12:23 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:53.427 ===================================================== 00:08:53.427 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:53.427 ===================================================== 00:08:53.427 Controller Capabilities/Features 00:08:53.427 ================================ 00:08:53.427 Vendor ID: 1b36 00:08:53.427 Subsystem Vendor ID: 1af4 00:08:53.427 Serial Number: 12341 00:08:53.427 Model Number: QEMU NVMe Ctrl 00:08:53.427 Firmware Version: 8.0.0 00:08:53.427 Recommended Arb Burst: 6 00:08:53.427 IEEE OUI Identifier: 00 54 52 00:08:53.427 Multi-path I/O 00:08:53.427 May have multiple subsystem ports: No 00:08:53.427 May have multiple 
controllers: No 00:08:53.427 Associated with SR-IOV VF: No 00:08:53.427 Max Data Transfer Size: 524288 00:08:53.427 Max Number of Namespaces: 256 00:08:53.427 Max Number of I/O Queues: 64 00:08:53.427 NVMe Specification Version (VS): 1.4 00:08:53.427 NVMe Specification Version (Identify): 1.4 00:08:53.427 Maximum Queue Entries: 2048 00:08:53.427 Contiguous Queues Required: Yes 00:08:53.427 Arbitration Mechanisms Supported 00:08:53.427 Weighted Round Robin: Not Supported 00:08:53.427 Vendor Specific: Not Supported 00:08:53.427 Reset Timeout: 7500 ms 00:08:53.427 Doorbell Stride: 4 bytes 00:08:53.427 NVM Subsystem Reset: Not Supported 00:08:53.427 Command Sets Supported 00:08:53.427 NVM Command Set: Supported 00:08:53.427 Boot Partition: Not Supported 00:08:53.427 Memory Page Size Minimum: 4096 bytes 00:08:53.427 Memory Page Size Maximum: 65536 bytes 00:08:53.427 Persistent Memory Region: Not Supported 00:08:53.427 Optional Asynchronous Events Supported 00:08:53.427 Namespace Attribute Notices: Supported 00:08:53.427 Firmware Activation Notices: Not Supported 00:08:53.427 ANA Change Notices: Not Supported 00:08:53.427 PLE Aggregate Log Change Notices: Not Supported 00:08:53.427 LBA Status Info Alert Notices: Not Supported 00:08:53.427 EGE Aggregate Log Change Notices: Not Supported 00:08:53.427 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.427 Zone Descriptor Change Notices: Not Supported 00:08:53.427 Discovery Log Change Notices: Not Supported 00:08:53.427 Controller Attributes 00:08:53.427 128-bit Host Identifier: Not Supported 00:08:53.427 Non-Operational Permissive Mode: Not Supported 00:08:53.427 NVM Sets: Not Supported 00:08:53.427 Read Recovery Levels: Not Supported 00:08:53.427 Endurance Groups: Not Supported 00:08:53.427 Predictable Latency Mode: Not Supported 00:08:53.427 Traffic Based Keep ALive: Not Supported 00:08:53.427 Namespace Granularity: Not Supported 00:08:53.427 SQ Associations: Not Supported 00:08:53.427 UUID List: Not Supported 00:08:53.427 Multi-Domain Subsystem: Not Supported 00:08:53.427 Fixed Capacity Management: Not Supported 00:08:53.427 Variable Capacity Management: Not Supported 00:08:53.428 Delete Endurance Group: Not Supported 00:08:53.428 Delete NVM Set: Not Supported 00:08:53.428 Extended LBA Formats Supported: Supported 00:08:53.428 Flexible Data Placement Supported: Not Supported 00:08:53.428 00:08:53.428 Controller Memory Buffer Support 00:08:53.428 ================================ 00:08:53.428 Supported: No 00:08:53.428 00:08:53.428 Persistent Memory Region Support 00:08:53.428 ================================ 00:08:53.428 Supported: No 00:08:53.428 00:08:53.428 Admin Command Set Attributes 00:08:53.428 ============================ 00:08:53.428 Security Send/Receive: Not Supported 00:08:53.428 Format NVM: Supported 00:08:53.428 Firmware Activate/Download: Not Supported 00:08:53.428 Namespace Management: Supported 00:08:53.428 Device Self-Test: Not Supported 00:08:53.428 Directives: Supported 00:08:53.428 NVMe-MI: Not Supported 00:08:53.428 Virtualization Management: Not Supported 00:08:53.428 Doorbell Buffer Config: Supported 00:08:53.428 Get LBA Status Capability: Not Supported 00:08:53.428 Command & Feature Lockdown Capability: Not Supported 00:08:53.428 Abort Command Limit: 4 00:08:53.428 Async Event Request Limit: 4 00:08:53.428 Number of Firmware Slots: N/A 00:08:53.428 Firmware Slot 1 Read-Only: N/A 00:08:53.428 Firmware Activation Without Reset: N/A 00:08:53.428 Multiple Update Detection Support: N/A 00:08:53.428 Firmware Update 
Granularity: No Information Provided 00:08:53.428 Per-Namespace SMART Log: Yes 00:08:53.428 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.428 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:53.428 Command Effects Log Page: Supported 00:08:53.428 Get Log Page Extended Data: Supported 00:08:53.428 Telemetry Log Pages: Not Supported 00:08:53.428 Persistent Event Log Pages: Not Supported 00:08:53.428 Supported Log Pages Log Page: May Support 00:08:53.428 Commands Supported & Effects Log Page: Not Supported 00:08:53.428 Feature Identifiers & Effects Log Page:May Support 00:08:53.428 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.428 Data Area 4 for Telemetry Log: Not Supported 00:08:53.428 Error Log Page Entries Supported: 1 00:08:53.428 Keep Alive: Not Supported 00:08:53.428 00:08:53.428 NVM Command Set Attributes 00:08:53.428 ========================== 00:08:53.428 Submission Queue Entry Size 00:08:53.428 Max: 64 00:08:53.428 Min: 64 00:08:53.428 Completion Queue Entry Size 00:08:53.428 Max: 16 00:08:53.428 Min: 16 00:08:53.428 Number of Namespaces: 256 00:08:53.428 Compare Command: Supported 00:08:53.428 Write Uncorrectable Command: Not Supported 00:08:53.428 Dataset Management Command: Supported 00:08:53.428 Write Zeroes Command: Supported 00:08:53.428 Set Features Save Field: Supported 00:08:53.428 Reservations: Not Supported 00:08:53.428 Timestamp: Supported 00:08:53.428 Copy: Supported 00:08:53.428 Volatile Write Cache: Present 00:08:53.428 Atomic Write Unit (Normal): 1 00:08:53.428 Atomic Write Unit (PFail): 1 00:08:53.428 Atomic Compare & Write Unit: 1 00:08:53.428 Fused Compare & Write: Not Supported 00:08:53.428 Scatter-Gather List 00:08:53.428 SGL Command Set: Supported 00:08:53.428 SGL Keyed: Not Supported 00:08:53.428 SGL Bit Bucket Descriptor: Not Supported 00:08:53.428 SGL Metadata Pointer: Not Supported 00:08:53.428 Oversized SGL: Not Supported 00:08:53.428 SGL Metadata Address: Not Supported 00:08:53.428 SGL Offset: Not Supported 00:08:53.428 Transport SGL Data Block: Not Supported 00:08:53.428 Replay Protected Memory Block: Not Supported 00:08:53.428 00:08:53.428 Firmware Slot Information 00:08:53.428 ========================= 00:08:53.428 Active slot: 1 00:08:53.428 Slot 1 Firmware Revision: 1.0 00:08:53.428 00:08:53.428 00:08:53.428 Commands Supported and Effects 00:08:53.428 ============================== 00:08:53.428 Admin Commands 00:08:53.428 -------------- 00:08:53.428 Delete I/O Submission Queue (00h): Supported 00:08:53.428 Create I/O Submission Queue (01h): Supported 00:08:53.428 Get Log Page (02h): Supported 00:08:53.428 Delete I/O Completion Queue (04h): Supported 00:08:53.428 Create I/O Completion Queue (05h): Supported 00:08:53.428 Identify (06h): Supported 00:08:53.428 Abort (08h): Supported 00:08:53.428 Set Features (09h): Supported 00:08:53.428 Get Features (0Ah): Supported 00:08:53.428 Asynchronous Event Request (0Ch): Supported 00:08:53.428 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.428 Directive Send (19h): Supported 00:08:53.428 Directive Receive (1Ah): Supported 00:08:53.428 Virtualization Management (1Ch): Supported 00:08:53.428 Doorbell Buffer Config (7Ch): Supported 00:08:53.428 Format NVM (80h): Supported LBA-Change 00:08:53.428 I/O Commands 00:08:53.428 ------------ 00:08:53.428 Flush (00h): Supported LBA-Change 00:08:53.428 Write (01h): Supported LBA-Change 00:08:53.428 Read (02h): Supported 00:08:53.428 Compare (05h): Supported 00:08:53.428 Write Zeroes (08h): Supported LBA-Change 00:08:53.428 
Dataset Management (09h): Supported LBA-Change 00:08:53.428 Unknown (0Ch): Supported 00:08:53.428 Unknown (12h): Supported 00:08:53.428 Copy (19h): Supported LBA-Change 00:08:53.428 Unknown (1Dh): Supported LBA-Change 00:08:53.428 00:08:53.428 Error Log 00:08:53.428 ========= 00:08:53.428 00:08:53.428 Arbitration 00:08:53.428 =========== 00:08:53.428 Arbitration Burst: no limit 00:08:53.428 00:08:53.428 Power Management 00:08:53.428 ================ 00:08:53.428 Number of Power States: 1 00:08:53.428 Current Power State: Power State #0 00:08:53.428 Power State #0: 00:08:53.428 Max Power: 25.00 W 00:08:53.428 Non-Operational State: Operational 00:08:53.428 Entry Latency: 16 microseconds 00:08:53.428 Exit Latency: 4 microseconds 00:08:53.428 Relative Read Throughput: 0 00:08:53.428 Relative Read Latency: 0 00:08:53.428 Relative Write Throughput: 0 00:08:53.428 Relative Write Latency: 0 00:08:53.428 Idle Power: Not Reported 00:08:53.428 Active Power: Not Reported 00:08:53.428 Non-Operational Permissive Mode: Not Supported 00:08:53.428 00:08:53.428 Health Information 00:08:53.428 ================== 00:08:53.428 Critical Warnings: 00:08:53.428 Available Spare Space: OK 00:08:53.428 Temperature: OK 00:08:53.428 Device Reliability: OK 00:08:53.428 Read Only: No 00:08:53.428 Volatile Memory Backup: OK 00:08:53.428 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.428 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.428 Available Spare: 0% 00:08:53.428 Available Spare Threshold: 0% 00:08:53.428 Life Percentage Used: 0% 00:08:53.428 Data Units Read: 973 00:08:53.428 Data Units Written: 847 00:08:53.428 Host Read Commands: 51773 00:08:53.428 Host Write Commands: 50667 00:08:53.428 Controller Busy Time: 0 minutes 00:08:53.428 Power Cycles: 0 00:08:53.428 Power On Hours: 0 hours 00:08:53.428 Unsafe Shutdowns: 0 00:08:53.428 Unrecoverable Media Errors: 0 00:08:53.428 Lifetime Error Log Entries: 0 00:08:53.428 Warning Temperature Time: 0 minutes 00:08:53.428 Critical Temperature Time: 0 minutes 00:08:53.428 00:08:53.428 Number of Queues 00:08:53.428 ================ 00:08:53.428 Number of I/O Submission Queues: 64 00:08:53.428 Number of I/O Completion Queues: 64 00:08:53.428 00:08:53.428 ZNS Specific Controller Data 00:08:53.428 ============================ 00:08:53.428 Zone Append Size Limit: 0 00:08:53.428 00:08:53.428 00:08:53.428 Active Namespaces 00:08:53.428 ================= 00:08:53.428 Namespace ID:1 00:08:53.428 Error Recovery Timeout: Unlimited 00:08:53.428 Command Set Identifier: NVM (00h) 00:08:53.428 Deallocate: Supported 00:08:53.428 Deallocated/Unwritten Error: Supported 00:08:53.428 Deallocated Read Value: All 0x00 00:08:53.428 Deallocate in Write Zeroes: Not Supported 00:08:53.428 Deallocated Guard Field: 0xFFFF 00:08:53.428 Flush: Supported 00:08:53.428 Reservation: Not Supported 00:08:53.428 Namespace Sharing Capabilities: Private 00:08:53.428 Size (in LBAs): 1310720 (5GiB) 00:08:53.428 Capacity (in LBAs): 1310720 (5GiB) 00:08:53.428 Utilization (in LBAs): 1310720 (5GiB) 00:08:53.428 Thin Provisioning: Not Supported 00:08:53.428 Per-NS Atomic Units: No 00:08:53.428 Maximum Single Source Range Length: 128 00:08:53.428 Maximum Copy Length: 128 00:08:53.428 Maximum Source Range Count: 128 00:08:53.428 NGUID/EUI64 Never Reused: No 00:08:53.428 Namespace Write Protected: No 00:08:53.428 Number of LBA Formats: 8 00:08:53.428 Current LBA Format: LBA Format #04 00:08:53.428 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.428 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:08:53.428 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.428 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.429 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.429 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.429 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.429 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.429 00:08:53.429 NVM Specific Namespace Data 00:08:53.429 =========================== 00:08:53.429 Logical Block Storage Tag Mask: 0 00:08:53.429 Protection Information Capabilities: 00:08:53.429 16b Guard Protection Information Storage Tag Support: No 00:08:53.429 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.429 Storage Tag Check Read Support: No 00:08:53.429 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.429 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.429 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.429 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.429 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.429 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.429 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.429 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.429 12:12:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:53.429 12:12:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:53.688 ===================================================== 00:08:53.688 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:53.688 ===================================================== 00:08:53.688 Controller Capabilities/Features 00:08:53.688 ================================ 00:08:53.688 Vendor ID: 1b36 00:08:53.688 Subsystem Vendor ID: 1af4 00:08:53.688 Serial Number: 12342 00:08:53.688 Model Number: QEMU NVMe Ctrl 00:08:53.688 Firmware Version: 8.0.0 00:08:53.688 Recommended Arb Burst: 6 00:08:53.688 IEEE OUI Identifier: 00 54 52 00:08:53.688 Multi-path I/O 00:08:53.688 May have multiple subsystem ports: No 00:08:53.688 May have multiple controllers: No 00:08:53.688 Associated with SR-IOV VF: No 00:08:53.688 Max Data Transfer Size: 524288 00:08:53.688 Max Number of Namespaces: 256 00:08:53.688 Max Number of I/O Queues: 64 00:08:53.688 NVMe Specification Version (VS): 1.4 00:08:53.688 NVMe Specification Version (Identify): 1.4 00:08:53.688 Maximum Queue Entries: 2048 00:08:53.688 Contiguous Queues Required: Yes 00:08:53.688 Arbitration Mechanisms Supported 00:08:53.688 Weighted Round Robin: Not Supported 00:08:53.688 Vendor Specific: Not Supported 00:08:53.688 Reset Timeout: 7500 ms 00:08:53.688 Doorbell Stride: 4 bytes 00:08:53.688 NVM Subsystem Reset: Not Supported 00:08:53.688 Command Sets Supported 00:08:53.688 NVM Command Set: Supported 00:08:53.688 Boot Partition: Not Supported 00:08:53.688 Memory Page Size Minimum: 4096 bytes 00:08:53.688 Memory Page Size Maximum: 65536 bytes 00:08:53.688 Persistent Memory Region: Not Supported 00:08:53.688 Optional Asynchronous Events Supported 00:08:53.688 Namespace Attribute Notices: Supported 00:08:53.688 Firmware 
Activation Notices: Not Supported 00:08:53.688 ANA Change Notices: Not Supported 00:08:53.688 PLE Aggregate Log Change Notices: Not Supported 00:08:53.688 LBA Status Info Alert Notices: Not Supported 00:08:53.688 EGE Aggregate Log Change Notices: Not Supported 00:08:53.688 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.688 Zone Descriptor Change Notices: Not Supported 00:08:53.688 Discovery Log Change Notices: Not Supported 00:08:53.688 Controller Attributes 00:08:53.688 128-bit Host Identifier: Not Supported 00:08:53.688 Non-Operational Permissive Mode: Not Supported 00:08:53.688 NVM Sets: Not Supported 00:08:53.688 Read Recovery Levels: Not Supported 00:08:53.688 Endurance Groups: Not Supported 00:08:53.688 Predictable Latency Mode: Not Supported 00:08:53.688 Traffic Based Keep ALive: Not Supported 00:08:53.688 Namespace Granularity: Not Supported 00:08:53.688 SQ Associations: Not Supported 00:08:53.688 UUID List: Not Supported 00:08:53.688 Multi-Domain Subsystem: Not Supported 00:08:53.688 Fixed Capacity Management: Not Supported 00:08:53.688 Variable Capacity Management: Not Supported 00:08:53.688 Delete Endurance Group: Not Supported 00:08:53.688 Delete NVM Set: Not Supported 00:08:53.688 Extended LBA Formats Supported: Supported 00:08:53.688 Flexible Data Placement Supported: Not Supported 00:08:53.688 00:08:53.688 Controller Memory Buffer Support 00:08:53.688 ================================ 00:08:53.688 Supported: No 00:08:53.688 00:08:53.688 Persistent Memory Region Support 00:08:53.688 ================================ 00:08:53.688 Supported: No 00:08:53.688 00:08:53.688 Admin Command Set Attributes 00:08:53.688 ============================ 00:08:53.688 Security Send/Receive: Not Supported 00:08:53.688 Format NVM: Supported 00:08:53.688 Firmware Activate/Download: Not Supported 00:08:53.688 Namespace Management: Supported 00:08:53.688 Device Self-Test: Not Supported 00:08:53.688 Directives: Supported 00:08:53.688 NVMe-MI: Not Supported 00:08:53.688 Virtualization Management: Not Supported 00:08:53.688 Doorbell Buffer Config: Supported 00:08:53.688 Get LBA Status Capability: Not Supported 00:08:53.688 Command & Feature Lockdown Capability: Not Supported 00:08:53.688 Abort Command Limit: 4 00:08:53.688 Async Event Request Limit: 4 00:08:53.688 Number of Firmware Slots: N/A 00:08:53.688 Firmware Slot 1 Read-Only: N/A 00:08:53.688 Firmware Activation Without Reset: N/A 00:08:53.688 Multiple Update Detection Support: N/A 00:08:53.688 Firmware Update Granularity: No Information Provided 00:08:53.688 Per-Namespace SMART Log: Yes 00:08:53.688 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.688 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:53.688 Command Effects Log Page: Supported 00:08:53.688 Get Log Page Extended Data: Supported 00:08:53.688 Telemetry Log Pages: Not Supported 00:08:53.688 Persistent Event Log Pages: Not Supported 00:08:53.688 Supported Log Pages Log Page: May Support 00:08:53.688 Commands Supported & Effects Log Page: Not Supported 00:08:53.688 Feature Identifiers & Effects Log Page:May Support 00:08:53.688 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.688 Data Area 4 for Telemetry Log: Not Supported 00:08:53.688 Error Log Page Entries Supported: 1 00:08:53.688 Keep Alive: Not Supported 00:08:53.688 00:08:53.688 NVM Command Set Attributes 00:08:53.688 ========================== 00:08:53.688 Submission Queue Entry Size 00:08:53.688 Max: 64 00:08:53.688 Min: 64 00:08:53.688 Completion Queue Entry Size 00:08:53.688 Max: 16 
00:08:53.688 Min: 16 00:08:53.688 Number of Namespaces: 256 00:08:53.688 Compare Command: Supported 00:08:53.688 Write Uncorrectable Command: Not Supported 00:08:53.688 Dataset Management Command: Supported 00:08:53.688 Write Zeroes Command: Supported 00:08:53.688 Set Features Save Field: Supported 00:08:53.688 Reservations: Not Supported 00:08:53.688 Timestamp: Supported 00:08:53.688 Copy: Supported 00:08:53.688 Volatile Write Cache: Present 00:08:53.688 Atomic Write Unit (Normal): 1 00:08:53.688 Atomic Write Unit (PFail): 1 00:08:53.688 Atomic Compare & Write Unit: 1 00:08:53.688 Fused Compare & Write: Not Supported 00:08:53.688 Scatter-Gather List 00:08:53.688 SGL Command Set: Supported 00:08:53.688 SGL Keyed: Not Supported 00:08:53.688 SGL Bit Bucket Descriptor: Not Supported 00:08:53.688 SGL Metadata Pointer: Not Supported 00:08:53.688 Oversized SGL: Not Supported 00:08:53.688 SGL Metadata Address: Not Supported 00:08:53.688 SGL Offset: Not Supported 00:08:53.688 Transport SGL Data Block: Not Supported 00:08:53.688 Replay Protected Memory Block: Not Supported 00:08:53.688 00:08:53.688 Firmware Slot Information 00:08:53.688 ========================= 00:08:53.688 Active slot: 1 00:08:53.688 Slot 1 Firmware Revision: 1.0 00:08:53.688 00:08:53.688 00:08:53.688 Commands Supported and Effects 00:08:53.688 ============================== 00:08:53.688 Admin Commands 00:08:53.688 -------------- 00:08:53.688 Delete I/O Submission Queue (00h): Supported 00:08:53.688 Create I/O Submission Queue (01h): Supported 00:08:53.688 Get Log Page (02h): Supported 00:08:53.688 Delete I/O Completion Queue (04h): Supported 00:08:53.688 Create I/O Completion Queue (05h): Supported 00:08:53.688 Identify (06h): Supported 00:08:53.688 Abort (08h): Supported 00:08:53.688 Set Features (09h): Supported 00:08:53.688 Get Features (0Ah): Supported 00:08:53.688 Asynchronous Event Request (0Ch): Supported 00:08:53.688 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.688 Directive Send (19h): Supported 00:08:53.688 Directive Receive (1Ah): Supported 00:08:53.688 Virtualization Management (1Ch): Supported 00:08:53.688 Doorbell Buffer Config (7Ch): Supported 00:08:53.688 Format NVM (80h): Supported LBA-Change 00:08:53.688 I/O Commands 00:08:53.688 ------------ 00:08:53.688 Flush (00h): Supported LBA-Change 00:08:53.688 Write (01h): Supported LBA-Change 00:08:53.688 Read (02h): Supported 00:08:53.688 Compare (05h): Supported 00:08:53.688 Write Zeroes (08h): Supported LBA-Change 00:08:53.688 Dataset Management (09h): Supported LBA-Change 00:08:53.688 Unknown (0Ch): Supported 00:08:53.688 Unknown (12h): Supported 00:08:53.688 Copy (19h): Supported LBA-Change 00:08:53.688 Unknown (1Dh): Supported LBA-Change 00:08:53.688 00:08:53.688 Error Log 00:08:53.688 ========= 00:08:53.688 00:08:53.689 Arbitration 00:08:53.689 =========== 00:08:53.689 Arbitration Burst: no limit 00:08:53.689 00:08:53.689 Power Management 00:08:53.689 ================ 00:08:53.689 Number of Power States: 1 00:08:53.689 Current Power State: Power State #0 00:08:53.689 Power State #0: 00:08:53.689 Max Power: 25.00 W 00:08:53.689 Non-Operational State: Operational 00:08:53.689 Entry Latency: 16 microseconds 00:08:53.689 Exit Latency: 4 microseconds 00:08:53.689 Relative Read Throughput: 0 00:08:53.689 Relative Read Latency: 0 00:08:53.689 Relative Write Throughput: 0 00:08:53.689 Relative Write Latency: 0 00:08:53.689 Idle Power: Not Reported 00:08:53.689 Active Power: Not Reported 00:08:53.689 Non-Operational Permissive Mode: Not Supported 
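Every health block in these dumps prints the composite temperature in Kelvin with the Celsius value alongside (323 Kelvin (50 Celsius), threshold 343 Kelvin (70 Celsius)); the conversion is a fixed offset. A tiny C sketch of that arithmetic; the threshold comparison at the end is our own illustration, not something the identify tool performs:

#include <stdio.h>

/* NVMe reports temperatures in Kelvin; the tool's parenthesized Celsius
 * values come from the fixed offset below. */
static int kelvin_to_celsius(int k) { return k - 273; }

int main(void)
{
    int current = 323, threshold = 343; /* values from the dumps above */

    printf("current:   %d C\n", kelvin_to_celsius(current));   /* 50 */
    printf("threshold: %d C\n", kelvin_to_celsius(threshold)); /* 70 */
    if (current >= threshold)
        printf("composite temperature critical warning would assert\n");
    return 0;
}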
00:08:53.689 00:08:53.689 Health Information 00:08:53.689 ================== 00:08:53.689 Critical Warnings: 00:08:53.689 Available Spare Space: OK 00:08:53.689 Temperature: OK 00:08:53.689 Device Reliability: OK 00:08:53.689 Read Only: No 00:08:53.689 Volatile Memory Backup: OK 00:08:53.689 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.689 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.689 Available Spare: 0% 00:08:53.689 Available Spare Threshold: 0% 00:08:53.689 Life Percentage Used: 0% 00:08:53.689 Data Units Read: 2063 00:08:53.689 Data Units Written: 1850 00:08:53.689 Host Read Commands: 107823 00:08:53.689 Host Write Commands: 106092 00:08:53.689 Controller Busy Time: 0 minutes 00:08:53.689 Power Cycles: 0 00:08:53.689 Power On Hours: 0 hours 00:08:53.689 Unsafe Shutdowns: 0 00:08:53.689 Unrecoverable Media Errors: 0 00:08:53.689 Lifetime Error Log Entries: 0 00:08:53.689 Warning Temperature Time: 0 minutes 00:08:53.689 Critical Temperature Time: 0 minutes 00:08:53.689 00:08:53.689 Number of Queues 00:08:53.689 ================ 00:08:53.689 Number of I/O Submission Queues: 64 00:08:53.689 Number of I/O Completion Queues: 64 00:08:53.689 00:08:53.689 ZNS Specific Controller Data 00:08:53.689 ============================ 00:08:53.689 Zone Append Size Limit: 0 00:08:53.689 00:08:53.689 00:08:53.689 Active Namespaces 00:08:53.689 ================= 00:08:53.689 Namespace ID:1 00:08:53.689 Error Recovery Timeout: Unlimited 00:08:53.689 Command Set Identifier: NVM (00h) 00:08:53.689 Deallocate: Supported 00:08:53.689 Deallocated/Unwritten Error: Supported 00:08:53.689 Deallocated Read Value: All 0x00 00:08:53.689 Deallocate in Write Zeroes: Not Supported 00:08:53.689 Deallocated Guard Field: 0xFFFF 00:08:53.689 Flush: Supported 00:08:53.689 Reservation: Not Supported 00:08:53.689 Namespace Sharing Capabilities: Private 00:08:53.689 Size (in LBAs): 1048576 (4GiB) 00:08:53.689 Capacity (in LBAs): 1048576 (4GiB) 00:08:53.689 Utilization (in LBAs): 1048576 (4GiB) 00:08:53.689 Thin Provisioning: Not Supported 00:08:53.689 Per-NS Atomic Units: No 00:08:53.689 Maximum Single Source Range Length: 128 00:08:53.689 Maximum Copy Length: 128 00:08:53.689 Maximum Source Range Count: 128 00:08:53.689 NGUID/EUI64 Never Reused: No 00:08:53.689 Namespace Write Protected: No 00:08:53.689 Number of LBA Formats: 8 00:08:53.689 Current LBA Format: LBA Format #04 00:08:53.689 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.689 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.689 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.689 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.689 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.689 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.689 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.689 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.689 00:08:53.689 NVM Specific Namespace Data 00:08:53.689 =========================== 00:08:53.689 Logical Block Storage Tag Mask: 0 00:08:53.689 Protection Information Capabilities: 00:08:53.689 16b Guard Protection Information Storage Tag Support: No 00:08:53.689 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.689 Storage Tag Check Read Support: No 00:08:53.689 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Namespace ID:2 00:08:53.689 Error Recovery Timeout: Unlimited 00:08:53.689 Command Set Identifier: NVM (00h) 00:08:53.689 Deallocate: Supported 00:08:53.689 Deallocated/Unwritten Error: Supported 00:08:53.689 Deallocated Read Value: All 0x00 00:08:53.689 Deallocate in Write Zeroes: Not Supported 00:08:53.689 Deallocated Guard Field: 0xFFFF 00:08:53.689 Flush: Supported 00:08:53.689 Reservation: Not Supported 00:08:53.689 Namespace Sharing Capabilities: Private 00:08:53.689 Size (in LBAs): 1048576 (4GiB) 00:08:53.689 Capacity (in LBAs): 1048576 (4GiB) 00:08:53.689 Utilization (in LBAs): 1048576 (4GiB) 00:08:53.689 Thin Provisioning: Not Supported 00:08:53.689 Per-NS Atomic Units: No 00:08:53.689 Maximum Single Source Range Length: 128 00:08:53.689 Maximum Copy Length: 128 00:08:53.689 Maximum Source Range Count: 128 00:08:53.689 NGUID/EUI64 Never Reused: No 00:08:53.689 Namespace Write Protected: No 00:08:53.689 Number of LBA Formats: 8 00:08:53.689 Current LBA Format: LBA Format #04 00:08:53.689 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.689 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.689 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.689 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.689 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.689 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.689 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.689 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.689 00:08:53.689 NVM Specific Namespace Data 00:08:53.689 =========================== 00:08:53.689 Logical Block Storage Tag Mask: 0 00:08:53.689 Protection Information Capabilities: 00:08:53.689 16b Guard Protection Information Storage Tag Support: No 00:08:53.689 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.689 Storage Tag Check Read Support: No 00:08:53.689 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.689 Namespace ID:3 00:08:53.689 Error Recovery Timeout: Unlimited 00:08:53.689 Command Set Identifier: NVM (00h) 00:08:53.689 Deallocate: Supported 00:08:53.689 Deallocated/Unwritten Error: Supported 00:08:53.689 Deallocated Read 
Value: All 0x00 00:08:53.689 Deallocate in Write Zeroes: Not Supported 00:08:53.689 Deallocated Guard Field: 0xFFFF 00:08:53.689 Flush: Supported 00:08:53.689 Reservation: Not Supported 00:08:53.689 Namespace Sharing Capabilities: Private 00:08:53.689 Size (in LBAs): 1048576 (4GiB) 00:08:53.689 Capacity (in LBAs): 1048576 (4GiB) 00:08:53.689 Utilization (in LBAs): 1048576 (4GiB) 00:08:53.689 Thin Provisioning: Not Supported 00:08:53.689 Per-NS Atomic Units: No 00:08:53.689 Maximum Single Source Range Length: 128 00:08:53.689 Maximum Copy Length: 128 00:08:53.689 Maximum Source Range Count: 128 00:08:53.689 NGUID/EUI64 Never Reused: No 00:08:53.689 Namespace Write Protected: No 00:08:53.689 Number of LBA Formats: 8 00:08:53.689 Current LBA Format: LBA Format #04 00:08:53.689 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.689 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.689 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.689 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.689 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.689 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.689 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.689 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.689 00:08:53.689 NVM Specific Namespace Data 00:08:53.689 =========================== 00:08:53.689 Logical Block Storage Tag Mask: 0 00:08:53.689 Protection Information Capabilities: 00:08:53.690 16b Guard Protection Information Storage Tag Support: No 00:08:53.690 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.690 Storage Tag Check Read Support: No 00:08:53.690 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.690 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.690 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.690 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.690 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.690 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.690 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.690 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.690 12:12:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:53.690 12:12:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:53.948 ===================================================== 00:08:53.948 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:53.948 ===================================================== 00:08:53.948 Controller Capabilities/Features 00:08:53.948 ================================ 00:08:53.948 Vendor ID: 1b36 00:08:53.948 Subsystem Vendor ID: 1af4 00:08:53.948 Serial Number: 12343 00:08:53.948 Model Number: QEMU NVMe Ctrl 00:08:53.948 Firmware Version: 8.0.0 00:08:53.948 Recommended Arb Burst: 6 00:08:53.948 IEEE OUI Identifier: 00 54 52 00:08:53.948 Multi-path I/O 00:08:53.948 May have multiple subsystem ports: No 00:08:53.948 May have multiple controllers: Yes 00:08:53.948 Associated with SR-IOV VF: No 00:08:53.948 Max Data Transfer Size: 524288 00:08:53.948 Max Number of Namespaces: 
256 00:08:53.948 Max Number of I/O Queues: 64 00:08:53.948 NVMe Specification Version (VS): 1.4 00:08:53.948 NVMe Specification Version (Identify): 1.4 00:08:53.948 Maximum Queue Entries: 2048 00:08:53.948 Contiguous Queues Required: Yes 00:08:53.948 Arbitration Mechanisms Supported 00:08:53.948 Weighted Round Robin: Not Supported 00:08:53.948 Vendor Specific: Not Supported 00:08:53.948 Reset Timeout: 7500 ms 00:08:53.948 Doorbell Stride: 4 bytes 00:08:53.948 NVM Subsystem Reset: Not Supported 00:08:53.948 Command Sets Supported 00:08:53.948 NVM Command Set: Supported 00:08:53.948 Boot Partition: Not Supported 00:08:53.948 Memory Page Size Minimum: 4096 bytes 00:08:53.948 Memory Page Size Maximum: 65536 bytes 00:08:53.949 Persistent Memory Region: Not Supported 00:08:53.949 Optional Asynchronous Events Supported 00:08:53.949 Namespace Attribute Notices: Supported 00:08:53.949 Firmware Activation Notices: Not Supported 00:08:53.949 ANA Change Notices: Not Supported 00:08:53.949 PLE Aggregate Log Change Notices: Not Supported 00:08:53.949 LBA Status Info Alert Notices: Not Supported 00:08:53.949 EGE Aggregate Log Change Notices: Not Supported 00:08:53.949 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.949 Zone Descriptor Change Notices: Not Supported 00:08:53.949 Discovery Log Change Notices: Not Supported 00:08:53.949 Controller Attributes 00:08:53.949 128-bit Host Identifier: Not Supported 00:08:53.949 Non-Operational Permissive Mode: Not Supported 00:08:53.949 NVM Sets: Not Supported 00:08:53.949 Read Recovery Levels: Not Supported 00:08:53.949 Endurance Groups: Supported 00:08:53.949 Predictable Latency Mode: Not Supported 00:08:53.949 Traffic Based Keep Alive: Not Supported 00:08:53.949 Namespace Granularity: Not Supported 00:08:53.949 SQ Associations: Not Supported 00:08:53.949 UUID List: Not Supported 00:08:53.949 Multi-Domain Subsystem: Not Supported 00:08:53.949 Fixed Capacity Management: Not Supported 00:08:53.949 Variable Capacity Management: Not Supported 00:08:53.949 Delete Endurance Group: Not Supported 00:08:53.949 Delete NVM Set: Not Supported 00:08:53.949 Extended LBA Formats Supported: Supported 00:08:53.949 Flexible Data Placement Supported: Supported 00:08:53.949 00:08:53.949 Controller Memory Buffer Support 00:08:53.949 ================================ 00:08:53.949 Supported: No 00:08:53.949 00:08:53.949 Persistent Memory Region Support 00:08:53.949 ================================ 00:08:53.949 Supported: No 00:08:53.949 00:08:53.949 Admin Command Set Attributes 00:08:53.949 ============================ 00:08:53.949 Security Send/Receive: Not Supported 00:08:53.949 Format NVM: Supported 00:08:53.949 Firmware Activate/Download: Not Supported 00:08:53.949 Namespace Management: Supported 00:08:53.949 Device Self-Test: Not Supported 00:08:53.949 Directives: Supported 00:08:53.949 NVMe-MI: Not Supported 00:08:53.949 Virtualization Management: Not Supported 00:08:53.949 Doorbell Buffer Config: Supported 00:08:53.949 Get LBA Status Capability: Not Supported 00:08:53.949 Command & Feature Lockdown Capability: Not Supported 00:08:53.949 Abort Command Limit: 4 00:08:53.949 Async Event Request Limit: 4 00:08:53.949 Number of Firmware Slots: N/A 00:08:53.949 Firmware Slot 1 Read-Only: N/A 00:08:53.949 Firmware Activation Without Reset: N/A 00:08:53.949 Multiple Update Detection Support: N/A 00:08:53.949 Firmware Update Granularity: No Information Provided 00:08:53.949 Per-Namespace SMART Log: Yes 00:08:53.949 Asymmetric Namespace Access Log Page: Not Supported
00:08:53.949 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:53.949 Command Effects Log Page: Supported 00:08:53.949 Get Log Page Extended Data: Supported 00:08:53.949 Telemetry Log Pages: Not Supported 00:08:53.949 Persistent Event Log Pages: Not Supported 00:08:53.949 Supported Log Pages Log Page: May Support 00:08:53.949 Commands Supported & Effects Log Page: Not Supported 00:08:53.949 Feature Identifiers & Effects Log Page: May Support 00:08:53.949 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.949 Data Area 4 for Telemetry Log: Not Supported 00:08:53.949 Error Log Page Entries Supported: 1 00:08:53.949 Keep Alive: Not Supported 00:08:53.949 00:08:53.949 NVM Command Set Attributes 00:08:53.949 ========================== 00:08:53.949 Submission Queue Entry Size 00:08:53.949 Max: 64 00:08:53.949 Min: 64 00:08:53.949 Completion Queue Entry Size 00:08:53.949 Max: 16 00:08:53.949 Min: 16 00:08:53.949 Number of Namespaces: 256 00:08:53.949 Compare Command: Supported 00:08:53.949 Write Uncorrectable Command: Not Supported 00:08:53.949 Dataset Management Command: Supported 00:08:53.949 Write Zeroes Command: Supported 00:08:53.949 Set Features Save Field: Supported 00:08:53.949 Reservations: Not Supported 00:08:53.949 Timestamp: Supported 00:08:53.949 Copy: Supported 00:08:53.949 Volatile Write Cache: Present 00:08:53.949 Atomic Write Unit (Normal): 1 00:08:53.949 Atomic Write Unit (PFail): 1 00:08:53.949 Atomic Compare & Write Unit: 1 00:08:53.949 Fused Compare & Write: Not Supported 00:08:53.949 Scatter-Gather List 00:08:53.949 SGL Command Set: Supported 00:08:53.949 SGL Keyed: Not Supported 00:08:53.949 SGL Bit Bucket Descriptor: Not Supported 00:08:53.949 SGL Metadata Pointer: Not Supported 00:08:53.949 Oversized SGL: Not Supported 00:08:53.949 SGL Metadata Address: Not Supported 00:08:53.949 SGL Offset: Not Supported 00:08:53.949 Transport SGL Data Block: Not Supported 00:08:53.949 Replay Protected Memory Block: Not Supported 00:08:53.949 00:08:53.949 Firmware Slot Information 00:08:53.949 ========================= 00:08:53.949 Active slot: 1 00:08:53.949 Slot 1 Firmware Revision: 1.0 00:08:53.949 00:08:53.949 00:08:53.949 Commands Supported and Effects 00:08:53.949 ============================== 00:08:53.949 Admin Commands 00:08:53.949 -------------- 00:08:53.949 Delete I/O Submission Queue (00h): Supported 00:08:53.949 Create I/O Submission Queue (01h): Supported 00:08:53.949 Get Log Page (02h): Supported 00:08:53.949 Delete I/O Completion Queue (04h): Supported 00:08:53.949 Create I/O Completion Queue (05h): Supported 00:08:53.949 Identify (06h): Supported 00:08:53.949 Abort (08h): Supported 00:08:53.949 Set Features (09h): Supported 00:08:53.949 Get Features (0Ah): Supported 00:08:53.949 Asynchronous Event Request (0Ch): Supported 00:08:53.949 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.949 Directive Send (19h): Supported 00:08:53.949 Directive Receive (1Ah): Supported 00:08:53.949 Virtualization Management (1Ch): Supported 00:08:53.949 Doorbell Buffer Config (7Ch): Supported 00:08:53.949 Format NVM (80h): Supported LBA-Change 00:08:53.949 I/O Commands 00:08:53.949 ------------ 00:08:53.949 Flush (00h): Supported LBA-Change 00:08:53.949 Write (01h): Supported LBA-Change 00:08:53.949 Read (02h): Supported 00:08:53.949 Compare (05h): Supported 00:08:53.949 Write Zeroes (08h): Supported LBA-Change 00:08:53.949 Dataset Management (09h): Supported LBA-Change 00:08:53.949 Unknown (0Ch): Supported 00:08:53.949 Unknown (12h): Supported 00:08:53.949 Copy
(19h): Supported LBA-Change 00:08:53.949 Unknown (1Dh): Supported LBA-Change 00:08:53.949 00:08:53.949 Error Log 00:08:53.949 ========= 00:08:53.949 00:08:53.949 Arbitration 00:08:53.949 =========== 00:08:53.949 Arbitration Burst: no limit 00:08:53.949 00:08:53.949 Power Management 00:08:53.949 ================ 00:08:53.949 Number of Power States: 1 00:08:53.949 Current Power State: Power State #0 00:08:53.949 Power State #0: 00:08:53.949 Max Power: 25.00 W 00:08:53.949 Non-Operational State: Operational 00:08:53.949 Entry Latency: 16 microseconds 00:08:53.949 Exit Latency: 4 microseconds 00:08:53.949 Relative Read Throughput: 0 00:08:53.949 Relative Read Latency: 0 00:08:53.949 Relative Write Throughput: 0 00:08:53.949 Relative Write Latency: 0 00:08:53.949 Idle Power: Not Reported 00:08:53.949 Active Power: Not Reported 00:08:53.949 Non-Operational Permissive Mode: Not Supported 00:08:53.949 00:08:53.949 Health Information 00:08:53.949 ================== 00:08:53.949 Critical Warnings: 00:08:53.949 Available Spare Space: OK 00:08:53.949 Temperature: OK 00:08:53.949 Device Reliability: OK 00:08:53.949 Read Only: No 00:08:53.949 Volatile Memory Backup: OK 00:08:53.949 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.949 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.949 Available Spare: 0% 00:08:53.949 Available Spare Threshold: 0% 00:08:53.949 Life Percentage Used: 0% 00:08:53.949 Data Units Read: 818 00:08:53.949 Data Units Written: 747 00:08:53.949 Host Read Commands: 36995 00:08:53.949 Host Write Commands: 36418 00:08:53.949 Controller Busy Time: 0 minutes 00:08:53.949 Power Cycles: 0 00:08:53.949 Power On Hours: 0 hours 00:08:53.949 Unsafe Shutdowns: 0 00:08:53.949 Unrecoverable Media Errors: 0 00:08:53.949 Lifetime Error Log Entries: 0 00:08:53.949 Warning Temperature Time: 0 minutes 00:08:53.949 Critical Temperature Time: 0 minutes 00:08:53.949 00:08:53.949 Number of Queues 00:08:53.949 ================ 00:08:53.949 Number of I/O Submission Queues: 64 00:08:53.949 Number of I/O Completion Queues: 64 00:08:53.949 00:08:53.949 ZNS Specific Controller Data 00:08:53.949 ============================ 00:08:53.949 Zone Append Size Limit: 0 00:08:53.949 00:08:53.949 00:08:53.949 Active Namespaces 00:08:53.949 ================= 00:08:53.949 Namespace ID:1 00:08:53.949 Error Recovery Timeout: Unlimited 00:08:53.950 Command Set Identifier: NVM (00h) 00:08:53.950 Deallocate: Supported 00:08:53.950 Deallocated/Unwritten Error: Supported 00:08:53.950 Deallocated Read Value: All 0x00 00:08:53.950 Deallocate in Write Zeroes: Not Supported 00:08:53.950 Deallocated Guard Field: 0xFFFF 00:08:53.950 Flush: Supported 00:08:53.950 Reservation: Not Supported 00:08:53.950 Namespace Sharing Capabilities: Multiple Controllers 00:08:53.950 Size (in LBAs): 262144 (1GiB) 00:08:53.950 Capacity (in LBAs): 262144 (1GiB) 00:08:53.950 Utilization (in LBAs): 262144 (1GiB) 00:08:53.950 Thin Provisioning: Not Supported 00:08:53.950 Per-NS Atomic Units: No 00:08:53.950 Maximum Single Source Range Length: 128 00:08:53.950 Maximum Copy Length: 128 00:08:53.950 Maximum Source Range Count: 128 00:08:53.950 NGUID/EUI64 Never Reused: No 00:08:53.950 Namespace Write Protected: No 00:08:53.950 Endurance group ID: 1 00:08:53.950 Number of LBA Formats: 8 00:08:53.950 Current LBA Format: LBA Format #04 00:08:53.950 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.950 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.950 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.950 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:53.950 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.950 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.950 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.950 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.950 00:08:53.950 Get Feature FDP: 00:08:53.950 ================ 00:08:53.950 Enabled: Yes 00:08:53.950 FDP configuration index: 0 00:08:53.950 00:08:53.950 FDP configurations log page 00:08:53.950 =========================== 00:08:53.950 Number of FDP configurations: 1 00:08:53.950 Version: 0 00:08:53.950 Size: 112 00:08:53.950 FDP Configuration Descriptor: 0 00:08:53.950 Descriptor Size: 96 00:08:53.950 Reclaim Group Identifier format: 2 00:08:53.950 FDP Volatile Write Cache: Not Present 00:08:53.950 FDP Configuration: Valid 00:08:53.950 Vendor Specific Size: 0 00:08:53.950 Number of Reclaim Groups: 2 00:08:53.950 Number of Reclaim Unit Handles: 8 00:08:53.950 Max Placement Identifiers: 128 00:08:53.950 Number of Namespaces Supported: 256 00:08:53.950 Reclaim unit Nominal Size: 6000000 bytes 00:08:53.950 Estimated Reclaim Unit Time Limit: Not Reported 00:08:53.950 RUH Desc #000: RUH Type: Initially Isolated 00:08:53.950 RUH Desc #001: RUH Type: Initially Isolated 00:08:53.950 RUH Desc #002: RUH Type: Initially Isolated 00:08:53.950 RUH Desc #003: RUH Type: Initially Isolated 00:08:53.950 RUH Desc #004: RUH Type: Initially Isolated 00:08:53.950 RUH Desc #005: RUH Type: Initially Isolated 00:08:53.950 RUH Desc #006: RUH Type: Initially Isolated 00:08:53.950 RUH Desc #007: RUH Type: Initially Isolated 00:08:53.950 00:08:53.950 FDP reclaim unit handle usage log page 00:08:53.950 ====================================== 00:08:53.950 Number of Reclaim Unit Handles: 8 00:08:53.950 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:53.950 RUH Usage Desc #001: RUH Attributes: Unused 00:08:53.950 RUH Usage Desc #002: RUH Attributes: Unused 00:08:53.950 RUH Usage Desc #003: RUH Attributes: Unused 00:08:53.950 RUH Usage Desc #004: RUH Attributes: Unused 00:08:53.950 RUH Usage Desc #005: RUH Attributes: Unused 00:08:53.950 RUH Usage Desc #006: RUH Attributes: Unused 00:08:53.950 RUH Usage Desc #007: RUH Attributes: Unused 00:08:53.950 00:08:53.950 FDP statistics log page 00:08:53.950 ======================= 00:08:53.950 Host bytes with metadata written: 470212608 00:08:53.950 Media bytes with metadata written: 470241280 00:08:53.950 Media bytes erased: 0 00:08:53.950 00:08:53.950 FDP events log page 00:08:53.950 =================== 00:08:53.950 Number of FDP events: 0 00:08:53.950 00:08:53.950 NVM Specific Namespace Data 00:08:53.950 =========================== 00:08:53.950 Logical Block Storage Tag Mask: 0 00:08:53.950 Protection Information Capabilities: 00:08:53.950 16b Guard Protection Information Storage Tag Support: No 00:08:53.950 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.950 Storage Tag Check Read Support: No 00:08:53.950 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.950 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.950 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.950 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.950 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.950 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.950 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.950 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.950 ************************************ 00:08:53.950 END TEST nvme_identify 00:08:53.950 ************************************ 00:08:53.950 00:08:53.950 real 0m1.390s 00:08:53.950 user 0m0.573s 00:08:53.950 sys 0m0.595s 00:08:53.950 12:12:24 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.950 12:12:24 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:53.950 12:12:24 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:53.950 12:12:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.950 12:12:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.950 12:12:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:53.950 ************************************ 00:08:53.950 START TEST nvme_perf 00:08:53.950 ************************************ 00:08:53.950 12:12:24 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:53.950 12:12:24 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:55.334 Initializing NVMe Controllers 00:08:55.334 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.334 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.334 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.334 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.334 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:55.334 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:55.334 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:55.334 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:55.334 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:55.334 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:55.334 Initialization complete. Launching workers. 
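For anyone reproducing the two test phases above by hand against the same QEMU-emulated controllers, a minimal shell sketch follows. The binary paths, PCIe address, and flags are exactly those recorded in this log; the flag glosses in the comments are my own reading of the tools' usage text, not part of this log, and should be checked against the built-in usage output for the SPDK revision under test.

# Identify pass (TEST nvme_identify): dump controller and namespace data
# for the QEMU NVMe controller at PCIe address 0000:00:13.0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

# Perf pass (TEST nvme_perf): 1 second (-t 1) of sequential reads (-w read)
# with 12288-byte I/Os (-o) at queue depth 128 (-q); -L enables software
# latency tracking and, given twice (-LL), also produces the per-range
# latency histograms printed below; -i 0 and -N are carried over verbatim
# from the command recorded in this run
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -w read -o 12288 -t 1 -LL -i 0 -N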
00:08:55.334 ======================================================== 00:08:55.334 Latency(us) 00:08:55.334 Device Information : IOPS MiB/s Average min max 00:08:55.334 PCIE (0000:00:10.0) NSID 1 from core 0: 12025.34 140.92 10657.74 8421.28 41010.58 00:08:55.334 PCIE (0000:00:11.0) NSID 1 from core 0: 12025.34 140.92 10640.94 8425.38 39651.22 00:08:55.334 PCIE (0000:00:13.0) NSID 1 from core 0: 12025.34 140.92 10622.40 8324.60 38861.74 00:08:55.334 PCIE (0000:00:12.0) NSID 1 from core 0: 12025.34 140.92 10602.85 8428.18 37238.87 00:08:55.334 PCIE (0000:00:12.0) NSID 2 from core 0: 12025.34 140.92 10584.20 8569.04 35739.69 00:08:55.334 PCIE (0000:00:12.0) NSID 3 from core 0: 12089.30 141.67 10509.68 8492.41 27429.27 00:08:55.334 ======================================================== 00:08:55.334 Total : 72215.99 846.28 10602.89 8324.60 41010.58 00:08:55.334 00:08:55.334 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:55.334 ================================================================================= 00:08:55.334 1.00000% : 8670.917us 00:08:55.334 10.00000% : 9124.628us 00:08:55.334 25.00000% : 9477.514us 00:08:55.334 50.00000% : 10032.049us 00:08:55.334 75.00000% : 10838.646us 00:08:55.334 90.00000% : 12703.902us 00:08:55.334 95.00000% : 14014.622us 00:08:55.334 98.00000% : 15426.166us 00:08:55.334 99.00000% : 29440.788us 00:08:55.334 99.50000% : 39523.249us 00:08:55.334 99.90000% : 40733.145us 00:08:55.334 99.99000% : 41136.443us 00:08:55.334 99.99900% : 41136.443us 00:08:55.334 99.99990% : 41136.443us 00:08:55.334 99.99999% : 41136.443us 00:08:55.334 00:08:55.334 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:55.334 ================================================================================= 00:08:55.334 1.00000% : 8771.742us 00:08:55.334 10.00000% : 9175.040us 00:08:55.334 25.00000% : 9527.926us 00:08:55.334 50.00000% : 9981.637us 00:08:55.334 75.00000% : 10838.646us 00:08:55.334 90.00000% : 12703.902us 00:08:55.334 95.00000% : 13913.797us 00:08:55.334 98.00000% : 15224.517us 00:08:55.334 99.00000% : 28835.840us 00:08:55.334 99.50000% : 38111.705us 00:08:55.334 99.90000% : 39321.600us 00:08:55.334 99.99000% : 39724.898us 00:08:55.334 99.99900% : 39724.898us 00:08:55.334 99.99990% : 39724.898us 00:08:55.334 99.99999% : 39724.898us 00:08:55.334 00:08:55.334 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:55.334 ================================================================================= 00:08:55.334 1.00000% : 8670.917us 00:08:55.334 10.00000% : 9175.040us 00:08:55.334 25.00000% : 9477.514us 00:08:55.334 50.00000% : 9981.637us 00:08:55.334 75.00000% : 10939.471us 00:08:55.334 90.00000% : 12502.252us 00:08:55.334 95.00000% : 13712.148us 00:08:55.334 98.00000% : 15224.517us 00:08:55.334 99.00000% : 29037.489us 00:08:55.334 99.50000% : 37305.108us 00:08:55.334 99.90000% : 38515.003us 00:08:55.334 99.99000% : 38918.302us 00:08:55.334 99.99900% : 38918.302us 00:08:55.334 99.99990% : 38918.302us 00:08:55.334 99.99999% : 38918.302us 00:08:55.334 00:08:55.334 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:55.334 ================================================================================= 00:08:55.334 1.00000% : 8721.329us 00:08:55.334 10.00000% : 9175.040us 00:08:55.334 25.00000% : 9477.514us 00:08:55.334 50.00000% : 9981.637us 00:08:55.334 75.00000% : 10939.471us 00:08:55.334 90.00000% : 12451.840us 00:08:55.334 95.00000% : 13611.323us 00:08:55.334 98.00000% : 15224.517us 
00:08:55.334 99.00000% : 27827.594us 00:08:55.334 99.50000% : 35691.914us 00:08:55.334 99.90000% : 36901.809us 00:08:55.334 99.99000% : 37305.108us 00:08:55.334 99.99900% : 37305.108us 00:08:55.334 99.99990% : 37305.108us 00:08:55.334 99.99999% : 37305.108us 00:08:55.334 00:08:55.334 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:55.334 ================================================================================= 00:08:55.334 1.00000% : 8771.742us 00:08:55.334 10.00000% : 9175.040us 00:08:55.334 25.00000% : 9477.514us 00:08:55.334 50.00000% : 9981.637us 00:08:55.334 75.00000% : 10939.471us 00:08:55.334 90.00000% : 12451.840us 00:08:55.334 95.00000% : 13812.972us 00:08:55.334 98.00000% : 15325.342us 00:08:55.334 99.00000% : 26214.400us 00:08:55.334 99.50000% : 34078.720us 00:08:55.334 99.90000% : 35490.265us 00:08:55.334 99.99000% : 35893.563us 00:08:55.334 99.99900% : 35893.563us 00:08:55.334 99.99990% : 35893.563us 00:08:55.334 99.99999% : 35893.563us 00:08:55.334 00:08:55.334 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:55.334 ================================================================================= 00:08:55.334 1.00000% : 8771.742us 00:08:55.334 10.00000% : 9175.040us 00:08:55.334 25.00000% : 9477.514us 00:08:55.334 50.00000% : 9981.637us 00:08:55.334 75.00000% : 10889.058us 00:08:55.334 90.00000% : 12653.489us 00:08:55.334 95.00000% : 13913.797us 00:08:55.334 98.00000% : 15325.342us 00:08:55.334 99.00000% : 16434.412us 00:08:55.334 99.50000% : 26012.751us 00:08:55.334 99.90000% : 27222.646us 00:08:55.334 99.99000% : 27424.295us 00:08:55.334 99.99900% : 27625.945us 00:08:55.334 99.99990% : 27625.945us 00:08:55.334 99.99999% : 27625.945us 00:08:55.334 00:08:55.334 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:55.334 ============================================================================== 00:08:55.334 Range in us Cumulative IO count 00:08:55.334 8418.855 - 8469.268: 0.0499% ( 6) 00:08:55.334 8469.268 - 8519.680: 0.1662% ( 14) 00:08:55.334 8519.680 - 8570.092: 0.4820% ( 38) 00:08:55.334 8570.092 - 8620.505: 0.7231% ( 29) 00:08:55.334 8620.505 - 8670.917: 1.0389% ( 38) 00:08:55.334 8670.917 - 8721.329: 1.4461% ( 49) 00:08:55.334 8721.329 - 8771.742: 2.1193% ( 81) 00:08:55.334 8771.742 - 8822.154: 2.8590% ( 89) 00:08:55.334 8822.154 - 8872.566: 3.6652% ( 97) 00:08:55.334 8872.566 - 8922.978: 4.8454% ( 142) 00:08:55.334 8922.978 - 8973.391: 6.1586% ( 158) 00:08:55.334 8973.391 - 9023.803: 7.6961% ( 185) 00:08:55.334 9023.803 - 9074.215: 9.1506% ( 175) 00:08:55.334 9074.215 - 9124.628: 10.8295% ( 202) 00:08:55.334 9124.628 - 9175.040: 12.5166% ( 203) 00:08:55.334 9175.040 - 9225.452: 14.2453% ( 208) 00:08:55.335 9225.452 - 9275.865: 16.1070% ( 224) 00:08:55.335 9275.865 - 9326.277: 18.1682% ( 248) 00:08:55.335 9326.277 - 9376.689: 20.5452% ( 286) 00:08:55.335 9376.689 - 9427.102: 22.7975% ( 271) 00:08:55.335 9427.102 - 9477.514: 25.1745% ( 286) 00:08:55.335 9477.514 - 9527.926: 27.7593% ( 311) 00:08:55.335 9527.926 - 9578.338: 30.1114% ( 283) 00:08:55.335 9578.338 - 9628.751: 32.5632% ( 295) 00:08:55.335 9628.751 - 9679.163: 35.2726% ( 326) 00:08:55.335 9679.163 - 9729.575: 37.8989% ( 316) 00:08:55.335 9729.575 - 9779.988: 40.2094% ( 278) 00:08:55.335 9779.988 - 9830.400: 42.9854% ( 334) 00:08:55.335 9830.400 - 9880.812: 45.2793% ( 276) 00:08:55.335 9880.812 - 9931.225: 47.6147% ( 281) 00:08:55.335 9931.225 - 9981.637: 49.8836% ( 273) 00:08:55.335 9981.637 - 10032.049: 52.0612% ( 262) 00:08:55.335 
10032.049 - 10082.462: 54.2304% ( 261) 00:08:55.335 10082.462 - 10132.874: 55.9757% ( 210) 00:08:55.335 10132.874 - 10183.286: 57.7543% ( 214) 00:08:55.335 10183.286 - 10233.698: 59.7074% ( 235) 00:08:55.335 10233.698 - 10284.111: 61.4195% ( 206) 00:08:55.335 10284.111 - 10334.523: 63.1150% ( 204) 00:08:55.335 10334.523 - 10384.935: 64.7689% ( 199) 00:08:55.335 10384.935 - 10435.348: 66.1902% ( 171) 00:08:55.335 10435.348 - 10485.760: 67.5781% ( 167) 00:08:55.335 10485.760 - 10536.172: 68.8082% ( 148) 00:08:55.335 10536.172 - 10586.585: 69.9219% ( 134) 00:08:55.335 10586.585 - 10636.997: 71.2101% ( 155) 00:08:55.335 10636.997 - 10687.409: 72.2490% ( 125) 00:08:55.335 10687.409 - 10737.822: 73.2796% ( 124) 00:08:55.335 10737.822 - 10788.234: 74.2354% ( 115) 00:08:55.335 10788.234 - 10838.646: 75.1247% ( 107) 00:08:55.335 10838.646 - 10889.058: 75.9973% ( 105) 00:08:55.335 10889.058 - 10939.471: 76.6539% ( 79) 00:08:55.335 10939.471 - 10989.883: 77.3853% ( 88) 00:08:55.335 10989.883 - 11040.295: 78.0834% ( 84) 00:08:55.335 11040.295 - 11090.708: 78.7733% ( 83) 00:08:55.335 11090.708 - 11141.120: 79.2470% ( 57) 00:08:55.335 11141.120 - 11191.532: 79.8122% ( 68) 00:08:55.335 11191.532 - 11241.945: 80.3275% ( 62) 00:08:55.335 11241.945 - 11292.357: 80.9259% ( 72) 00:08:55.335 11292.357 - 11342.769: 81.4661% ( 65) 00:08:55.335 11342.769 - 11393.182: 82.1061% ( 77) 00:08:55.335 11393.182 - 11443.594: 82.4801% ( 45) 00:08:55.335 11443.594 - 11494.006: 82.9289% ( 54) 00:08:55.335 11494.006 - 11544.418: 83.3860% ( 55) 00:08:55.335 11544.418 - 11594.831: 83.7517% ( 44) 00:08:55.335 11594.831 - 11645.243: 84.1589% ( 49) 00:08:55.335 11645.243 - 11695.655: 84.5412% ( 46) 00:08:55.335 11695.655 - 11746.068: 84.8072% ( 32) 00:08:55.335 11746.068 - 11796.480: 85.0233% ( 26) 00:08:55.335 11796.480 - 11846.892: 85.2892% ( 32) 00:08:55.335 11846.892 - 11897.305: 85.5136% ( 27) 00:08:55.335 11897.305 - 11947.717: 85.8211% ( 37) 00:08:55.335 11947.717 - 11998.129: 86.0954% ( 33) 00:08:55.335 11998.129 - 12048.542: 86.3614% ( 32) 00:08:55.335 12048.542 - 12098.954: 86.6107% ( 30) 00:08:55.335 12098.954 - 12149.366: 86.9598% ( 42) 00:08:55.335 12149.366 - 12199.778: 87.2424% ( 34) 00:08:55.335 12199.778 - 12250.191: 87.5249% ( 34) 00:08:55.335 12250.191 - 12300.603: 87.8823% ( 43) 00:08:55.335 12300.603 - 12351.015: 88.1316% ( 30) 00:08:55.335 12351.015 - 12401.428: 88.4890% ( 43) 00:08:55.335 12401.428 - 12451.840: 88.7633% ( 33) 00:08:55.335 12451.840 - 12502.252: 89.1290% ( 44) 00:08:55.335 12502.252 - 12552.665: 89.3783% ( 30) 00:08:55.335 12552.665 - 12603.077: 89.5944% ( 26) 00:08:55.335 12603.077 - 12653.489: 89.8936% ( 36) 00:08:55.335 12653.489 - 12703.902: 90.1263% ( 28) 00:08:55.335 12703.902 - 12754.314: 90.3424% ( 26) 00:08:55.335 12754.314 - 12804.726: 90.5585% ( 26) 00:08:55.335 12804.726 - 12855.138: 90.7829% ( 27) 00:08:55.335 12855.138 - 12905.551: 90.9990% ( 26) 00:08:55.335 12905.551 - 13006.375: 91.5143% ( 62) 00:08:55.335 13006.375 - 13107.200: 91.8966% ( 46) 00:08:55.335 13107.200 - 13208.025: 92.2540% ( 43) 00:08:55.335 13208.025 - 13308.849: 92.8108% ( 67) 00:08:55.335 13308.849 - 13409.674: 93.2015% ( 47) 00:08:55.335 13409.674 - 13510.498: 93.5755% ( 45) 00:08:55.335 13510.498 - 13611.323: 93.9495% ( 45) 00:08:55.335 13611.323 - 13712.148: 94.3152% ( 44) 00:08:55.335 13712.148 - 13812.972: 94.6642% ( 42) 00:08:55.335 13812.972 - 13913.797: 94.9717% ( 37) 00:08:55.335 13913.797 - 14014.622: 95.2959% ( 39) 00:08:55.335 14014.622 - 14115.446: 95.5369% ( 29) 00:08:55.335 14115.446 - 
14216.271: 95.7779% ( 29) 00:08:55.335 14216.271 - 14317.095: 96.0605% ( 34) 00:08:55.335 14317.095 - 14417.920: 96.3680% ( 37) 00:08:55.335 14417.920 - 14518.745: 96.6007% ( 28) 00:08:55.335 14518.745 - 14619.569: 96.8168% ( 26) 00:08:55.335 14619.569 - 14720.394: 97.0495% ( 28) 00:08:55.335 14720.394 - 14821.218: 97.2656% ( 26) 00:08:55.335 14821.218 - 14922.043: 97.4318% ( 20) 00:08:55.335 14922.043 - 15022.868: 97.5814% ( 18) 00:08:55.335 15022.868 - 15123.692: 97.7061% ( 15) 00:08:55.335 15123.692 - 15224.517: 97.8142% ( 13) 00:08:55.335 15224.517 - 15325.342: 97.9555% ( 17) 00:08:55.335 15325.342 - 15426.166: 98.1217% ( 20) 00:08:55.335 15426.166 - 15526.991: 98.2713% ( 18) 00:08:55.335 15526.991 - 15627.815: 98.3876% ( 14) 00:08:55.335 15627.815 - 15728.640: 98.5206% ( 16) 00:08:55.335 15728.640 - 15829.465: 98.5788% ( 7) 00:08:55.335 15829.465 - 15930.289: 98.6536% ( 9) 00:08:55.335 15930.289 - 16031.114: 98.6785% ( 3) 00:08:55.335 16031.114 - 16131.938: 98.7201% ( 5) 00:08:55.335 16131.938 - 16232.763: 98.7699% ( 6) 00:08:55.335 16232.763 - 16333.588: 98.7949% ( 3) 00:08:55.335 16333.588 - 16434.412: 98.8447% ( 6) 00:08:55.335 16434.412 - 16535.237: 98.8946% ( 6) 00:08:55.335 16535.237 - 16636.062: 98.9362% ( 5) 00:08:55.335 29037.489 - 29239.138: 98.9445% ( 1) 00:08:55.335 29239.138 - 29440.788: 99.0110% ( 8) 00:08:55.335 29440.788 - 29642.437: 99.0608% ( 6) 00:08:55.335 29642.437 - 29844.086: 99.1273% ( 8) 00:08:55.335 29844.086 - 30045.735: 99.1855% ( 7) 00:08:55.335 30045.735 - 30247.385: 99.2437% ( 7) 00:08:55.335 30247.385 - 30449.034: 99.3102% ( 8) 00:08:55.335 30449.034 - 30650.683: 99.3684% ( 7) 00:08:55.335 30650.683 - 30852.332: 99.4182% ( 6) 00:08:55.335 30852.332 - 31053.982: 99.4681% ( 6) 00:08:55.335 39119.951 - 39321.600: 99.4930% ( 3) 00:08:55.335 39321.600 - 39523.249: 99.5429% ( 6) 00:08:55.335 39523.249 - 39724.898: 99.5928% ( 6) 00:08:55.335 39724.898 - 39926.548: 99.6509% ( 7) 00:08:55.335 39926.548 - 40128.197: 99.7174% ( 8) 00:08:55.335 40128.197 - 40329.846: 99.7756% ( 7) 00:08:55.335 40329.846 - 40531.495: 99.8421% ( 8) 00:08:55.335 40531.495 - 40733.145: 99.9003% ( 7) 00:08:55.335 40733.145 - 40934.794: 99.9668% ( 8) 00:08:55.335 40934.794 - 41136.443: 100.0000% ( 4) 00:08:55.335 00:08:55.335 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:55.335 ============================================================================== 00:08:55.335 Range in us Cumulative IO count 00:08:55.335 8418.855 - 8469.268: 0.0332% ( 4) 00:08:55.335 8469.268 - 8519.680: 0.0831% ( 6) 00:08:55.335 8519.680 - 8570.092: 0.1413% ( 7) 00:08:55.335 8570.092 - 8620.505: 0.2493% ( 13) 00:08:55.335 8620.505 - 8670.917: 0.4405% ( 23) 00:08:55.335 8670.917 - 8721.329: 0.7812% ( 41) 00:08:55.335 8721.329 - 8771.742: 1.3298% ( 66) 00:08:55.335 8771.742 - 8822.154: 1.9781% ( 78) 00:08:55.335 8822.154 - 8872.566: 2.8590% ( 106) 00:08:55.335 8872.566 - 8922.978: 3.7650% ( 109) 00:08:55.335 8922.978 - 8973.391: 4.8122% ( 126) 00:08:55.335 8973.391 - 9023.803: 6.1087% ( 156) 00:08:55.335 9023.803 - 9074.215: 7.5216% ( 170) 00:08:55.335 9074.215 - 9124.628: 9.0841% ( 188) 00:08:55.335 9124.628 - 9175.040: 10.9126% ( 220) 00:08:55.335 9175.040 - 9225.452: 12.8075% ( 228) 00:08:55.335 9225.452 - 9275.865: 14.8354% ( 244) 00:08:55.335 9275.865 - 9326.277: 17.0795% ( 270) 00:08:55.335 9326.277 - 9376.689: 19.5562% ( 298) 00:08:55.335 9376.689 - 9427.102: 22.1742% ( 315) 00:08:55.335 9427.102 - 9477.514: 24.8587% ( 323) 00:08:55.335 9477.514 - 9527.926: 27.4767% ( 315) 00:08:55.335 
9527.926 - 9578.338: 30.0366% ( 308) 00:08:55.335 9578.338 - 9628.751: 32.6463% ( 314) 00:08:55.335 9628.751 - 9679.163: 35.3391% ( 324) 00:08:55.335 9679.163 - 9729.575: 37.7909% ( 295) 00:08:55.335 9729.575 - 9779.988: 40.4006% ( 314) 00:08:55.335 9779.988 - 9830.400: 42.9604% ( 308) 00:08:55.335 9830.400 - 9880.812: 45.4289% ( 297) 00:08:55.335 9880.812 - 9931.225: 47.7560% ( 280) 00:08:55.335 9931.225 - 9981.637: 50.1164% ( 284) 00:08:55.335 9981.637 - 10032.049: 52.4352% ( 279) 00:08:55.335 10032.049 - 10082.462: 54.5878% ( 259) 00:08:55.335 10082.462 - 10132.874: 56.5991% ( 242) 00:08:55.335 10132.874 - 10183.286: 58.6270% ( 244) 00:08:55.335 10183.286 - 10233.698: 60.2975% ( 201) 00:08:55.335 10233.698 - 10284.111: 61.9847% ( 203) 00:08:55.335 10284.111 - 10334.523: 63.5057% ( 183) 00:08:55.336 10334.523 - 10384.935: 64.9102% ( 169) 00:08:55.336 10384.935 - 10435.348: 66.2899% ( 166) 00:08:55.336 10435.348 - 10485.760: 67.5698% ( 154) 00:08:55.336 10485.760 - 10536.172: 68.8331% ( 152) 00:08:55.336 10536.172 - 10586.585: 70.0549% ( 147) 00:08:55.336 10586.585 - 10636.997: 71.2018% ( 138) 00:08:55.336 10636.997 - 10687.409: 72.2490% ( 126) 00:08:55.336 10687.409 - 10737.822: 73.3211% ( 129) 00:08:55.336 10737.822 - 10788.234: 74.3268% ( 121) 00:08:55.336 10788.234 - 10838.646: 75.2826% ( 115) 00:08:55.336 10838.646 - 10889.058: 76.1802% ( 108) 00:08:55.336 10889.058 - 10939.471: 77.0612% ( 106) 00:08:55.336 10939.471 - 10989.883: 77.8092% ( 90) 00:08:55.336 10989.883 - 11040.295: 78.4658% ( 79) 00:08:55.336 11040.295 - 11090.708: 79.1473% ( 82) 00:08:55.336 11090.708 - 11141.120: 79.7540% ( 73) 00:08:55.336 11141.120 - 11191.532: 80.3358% ( 70) 00:08:55.336 11191.532 - 11241.945: 80.8511% ( 62) 00:08:55.336 11241.945 - 11292.357: 81.3414% ( 59) 00:08:55.336 11292.357 - 11342.769: 81.8401% ( 60) 00:08:55.336 11342.769 - 11393.182: 82.3305% ( 59) 00:08:55.336 11393.182 - 11443.594: 82.7959% ( 56) 00:08:55.336 11443.594 - 11494.006: 83.2530% ( 55) 00:08:55.336 11494.006 - 11544.418: 83.6935% ( 53) 00:08:55.336 11544.418 - 11594.831: 84.1340% ( 53) 00:08:55.336 11594.831 - 11645.243: 84.5412% ( 49) 00:08:55.336 11645.243 - 11695.655: 84.8570% ( 38) 00:08:55.336 11695.655 - 11746.068: 85.2311% ( 45) 00:08:55.336 11746.068 - 11796.480: 85.5469% ( 38) 00:08:55.336 11796.480 - 11846.892: 85.8710% ( 39) 00:08:55.336 11846.892 - 11897.305: 86.1619% ( 35) 00:08:55.336 11897.305 - 11947.717: 86.4195% ( 31) 00:08:55.336 11947.717 - 11998.129: 86.6606% ( 29) 00:08:55.336 11998.129 - 12048.542: 86.9182% ( 31) 00:08:55.336 12048.542 - 12098.954: 87.1509% ( 28) 00:08:55.336 12098.954 - 12149.366: 87.4003% ( 30) 00:08:55.336 12149.366 - 12199.778: 87.6662% ( 32) 00:08:55.336 12199.778 - 12250.191: 87.9405% ( 33) 00:08:55.336 12250.191 - 12300.603: 88.2231% ( 34) 00:08:55.336 12300.603 - 12351.015: 88.4973% ( 33) 00:08:55.336 12351.015 - 12401.428: 88.7633% ( 32) 00:08:55.336 12401.428 - 12451.840: 89.0209% ( 31) 00:08:55.336 12451.840 - 12502.252: 89.2370% ( 26) 00:08:55.336 12502.252 - 12552.665: 89.4531% ( 26) 00:08:55.336 12552.665 - 12603.077: 89.6609% ( 25) 00:08:55.336 12603.077 - 12653.489: 89.8853% ( 27) 00:08:55.336 12653.489 - 12703.902: 90.1263% ( 29) 00:08:55.336 12703.902 - 12754.314: 90.3341% ( 25) 00:08:55.336 12754.314 - 12804.726: 90.5751% ( 29) 00:08:55.336 12804.726 - 12855.138: 90.8494% ( 33) 00:08:55.336 12855.138 - 12905.551: 91.0904% ( 29) 00:08:55.336 12905.551 - 13006.375: 91.5974% ( 61) 00:08:55.336 13006.375 - 13107.200: 92.1626% ( 68) 00:08:55.336 13107.200 - 13208.025: 
92.6612% ( 60) 00:08:55.336 13208.025 - 13308.849: 93.0685% ( 49) 00:08:55.336 13308.849 - 13409.674: 93.4425% ( 45) 00:08:55.336 13409.674 - 13510.498: 93.7916% ( 42) 00:08:55.336 13510.498 - 13611.323: 94.2154% ( 51) 00:08:55.336 13611.323 - 13712.148: 94.5728% ( 43) 00:08:55.336 13712.148 - 13812.972: 94.8720% ( 36) 00:08:55.336 13812.972 - 13913.797: 95.1878% ( 38) 00:08:55.336 13913.797 - 14014.622: 95.4787% ( 35) 00:08:55.336 14014.622 - 14115.446: 95.7862% ( 37) 00:08:55.336 14115.446 - 14216.271: 96.0605% ( 33) 00:08:55.336 14216.271 - 14317.095: 96.3182% ( 31) 00:08:55.336 14317.095 - 14417.920: 96.5592% ( 29) 00:08:55.336 14417.920 - 14518.745: 96.8002% ( 29) 00:08:55.336 14518.745 - 14619.569: 97.0080% ( 25) 00:08:55.336 14619.569 - 14720.394: 97.2573% ( 30) 00:08:55.336 14720.394 - 14821.218: 97.5066% ( 30) 00:08:55.336 14821.218 - 14922.043: 97.7144% ( 25) 00:08:55.336 14922.043 - 15022.868: 97.8142% ( 12) 00:08:55.336 15022.868 - 15123.692: 97.9222% ( 13) 00:08:55.336 15123.692 - 15224.517: 98.0219% ( 12) 00:08:55.336 15224.517 - 15325.342: 98.1300% ( 13) 00:08:55.336 15325.342 - 15426.166: 98.2630% ( 16) 00:08:55.336 15426.166 - 15526.991: 98.4126% ( 18) 00:08:55.336 15526.991 - 15627.815: 98.5123% ( 12) 00:08:55.336 15627.815 - 15728.640: 98.5622% ( 6) 00:08:55.336 15728.640 - 15829.465: 98.6203% ( 7) 00:08:55.336 15829.465 - 15930.289: 98.6702% ( 6) 00:08:55.336 15930.289 - 16031.114: 98.7201% ( 6) 00:08:55.336 16031.114 - 16131.938: 98.7699% ( 6) 00:08:55.336 16131.938 - 16232.763: 98.8115% ( 5) 00:08:55.336 16232.763 - 16333.588: 98.8531% ( 5) 00:08:55.336 16333.588 - 16434.412: 98.9029% ( 6) 00:08:55.336 16434.412 - 16535.237: 98.9362% ( 4) 00:08:55.336 28634.191 - 28835.840: 99.0027% ( 8) 00:08:55.336 28835.840 - 29037.489: 99.0691% ( 8) 00:08:55.336 29037.489 - 29239.138: 99.1356% ( 8) 00:08:55.336 29239.138 - 29440.788: 99.1938% ( 7) 00:08:55.336 29440.788 - 29642.437: 99.2603% ( 8) 00:08:55.336 29642.437 - 29844.086: 99.3185% ( 7) 00:08:55.336 29844.086 - 30045.735: 99.3850% ( 8) 00:08:55.336 30045.735 - 30247.385: 99.4515% ( 8) 00:08:55.336 30247.385 - 30449.034: 99.4681% ( 2) 00:08:55.336 37910.055 - 38111.705: 99.5346% ( 8) 00:08:55.336 38111.705 - 38313.354: 99.6011% ( 8) 00:08:55.336 38313.354 - 38515.003: 99.6592% ( 7) 00:08:55.336 38515.003 - 38716.652: 99.7257% ( 8) 00:08:55.336 38716.652 - 38918.302: 99.7756% ( 6) 00:08:55.336 38918.302 - 39119.951: 99.8421% ( 8) 00:08:55.336 39119.951 - 39321.600: 99.9003% ( 7) 00:08:55.336 39321.600 - 39523.249: 99.9584% ( 7) 00:08:55.336 39523.249 - 39724.898: 100.0000% ( 5) 00:08:55.336 00:08:55.336 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:55.336 ============================================================================== 00:08:55.336 Range in us Cumulative IO count 00:08:55.336 8318.031 - 8368.443: 0.0416% ( 5) 00:08:55.336 8368.443 - 8418.855: 0.0665% ( 3) 00:08:55.336 8418.855 - 8469.268: 0.1330% ( 8) 00:08:55.336 8469.268 - 8519.680: 0.3408% ( 25) 00:08:55.336 8519.680 - 8570.092: 0.4654% ( 15) 00:08:55.336 8570.092 - 8620.505: 0.7563% ( 35) 00:08:55.336 8620.505 - 8670.917: 1.1137% ( 43) 00:08:55.336 8670.917 - 8721.329: 1.5459% ( 52) 00:08:55.336 8721.329 - 8771.742: 2.0778% ( 64) 00:08:55.336 8771.742 - 8822.154: 2.7759% ( 84) 00:08:55.336 8822.154 - 8872.566: 3.6652% ( 107) 00:08:55.336 8872.566 - 8922.978: 4.6543% ( 119) 00:08:55.336 8922.978 - 8973.391: 5.7264% ( 129) 00:08:55.336 8973.391 - 9023.803: 6.9149% ( 143) 00:08:55.336 9023.803 - 9074.215: 8.4026% ( 179) 00:08:55.336 
9074.215 - 9124.628: 9.9318% ( 184) 00:08:55.336 9124.628 - 9175.040: 11.6107% ( 202) 00:08:55.336 9175.040 - 9225.452: 13.6553% ( 246) 00:08:55.336 9225.452 - 9275.865: 15.7746% ( 255) 00:08:55.336 9275.865 - 9326.277: 17.9688% ( 264) 00:08:55.336 9326.277 - 9376.689: 20.2543% ( 275) 00:08:55.336 9376.689 - 9427.102: 22.6147% ( 284) 00:08:55.336 9427.102 - 9477.514: 25.0499% ( 293) 00:08:55.336 9477.514 - 9527.926: 27.6845% ( 317) 00:08:55.336 9527.926 - 9578.338: 30.2610% ( 310) 00:08:55.336 9578.338 - 9628.751: 32.8707% ( 314) 00:08:55.336 9628.751 - 9679.163: 35.5136% ( 318) 00:08:55.336 9679.163 - 9729.575: 37.9488% ( 293) 00:08:55.336 9729.575 - 9779.988: 40.4671% ( 303) 00:08:55.336 9779.988 - 9830.400: 42.9438% ( 298) 00:08:55.336 9830.400 - 9880.812: 45.4289% ( 299) 00:08:55.336 9880.812 - 9931.225: 47.9471% ( 303) 00:08:55.336 9931.225 - 9981.637: 50.2743% ( 280) 00:08:55.336 9981.637 - 10032.049: 52.4269% ( 259) 00:08:55.336 10032.049 - 10082.462: 54.6293% ( 265) 00:08:55.336 10082.462 - 10132.874: 56.5326% ( 229) 00:08:55.336 10132.874 - 10183.286: 58.4358% ( 229) 00:08:55.336 10183.286 - 10233.698: 60.3059% ( 225) 00:08:55.336 10233.698 - 10284.111: 61.8434% ( 185) 00:08:55.336 10284.111 - 10334.523: 63.3062% ( 176) 00:08:55.336 10334.523 - 10384.935: 64.6858% ( 166) 00:08:55.336 10384.935 - 10435.348: 66.0821% ( 168) 00:08:55.337 10435.348 - 10485.760: 67.3039% ( 147) 00:08:55.337 10485.760 - 10536.172: 68.5007% ( 144) 00:08:55.337 10536.172 - 10586.585: 69.5811% ( 130) 00:08:55.337 10586.585 - 10636.997: 70.5452% ( 116) 00:08:55.337 10636.997 - 10687.409: 71.4013% ( 103) 00:08:55.337 10687.409 - 10737.822: 72.3072% ( 109) 00:08:55.337 10737.822 - 10788.234: 73.1549% ( 102) 00:08:55.337 10788.234 - 10838.646: 73.9860% ( 100) 00:08:55.337 10838.646 - 10889.058: 74.8670% ( 106) 00:08:55.337 10889.058 - 10939.471: 75.6649% ( 96) 00:08:55.337 10939.471 - 10989.883: 76.4628% ( 96) 00:08:55.337 10989.883 - 11040.295: 77.2939% ( 100) 00:08:55.337 11040.295 - 11090.708: 78.0834% ( 95) 00:08:55.337 11090.708 - 11141.120: 78.6985% ( 74) 00:08:55.337 11141.120 - 11191.532: 79.3551% ( 79) 00:08:55.337 11191.532 - 11241.945: 79.9202% ( 68) 00:08:55.337 11241.945 - 11292.357: 80.6184% ( 84) 00:08:55.337 11292.357 - 11342.769: 81.2084% ( 71) 00:08:55.337 11342.769 - 11393.182: 81.7154% ( 61) 00:08:55.337 11393.182 - 11443.594: 82.1892% ( 57) 00:08:55.337 11443.594 - 11494.006: 82.7128% ( 63) 00:08:55.337 11494.006 - 11544.418: 83.2945% ( 70) 00:08:55.337 11544.418 - 11594.831: 83.7350% ( 53) 00:08:55.337 11594.831 - 11645.243: 84.2337% ( 60) 00:08:55.337 11645.243 - 11695.655: 84.6742% ( 53) 00:08:55.337 11695.655 - 11746.068: 85.0898% ( 50) 00:08:55.337 11746.068 - 11796.480: 85.4721% ( 46) 00:08:55.337 11796.480 - 11846.892: 85.8211% ( 42) 00:08:55.337 11846.892 - 11897.305: 86.2201% ( 48) 00:08:55.337 11897.305 - 11947.717: 86.6024% ( 46) 00:08:55.337 11947.717 - 11998.129: 86.9764% ( 45) 00:08:55.337 11998.129 - 12048.542: 87.3587% ( 46) 00:08:55.337 12048.542 - 12098.954: 87.6828% ( 39) 00:08:55.337 12098.954 - 12149.366: 88.0153% ( 40) 00:08:55.337 12149.366 - 12199.778: 88.3477% ( 40) 00:08:55.337 12199.778 - 12250.191: 88.7301% ( 46) 00:08:55.337 12250.191 - 12300.603: 89.0957% ( 44) 00:08:55.337 12300.603 - 12351.015: 89.4116% ( 38) 00:08:55.337 12351.015 - 12401.428: 89.7274% ( 38) 00:08:55.337 12401.428 - 12451.840: 89.9435% ( 26) 00:08:55.337 12451.840 - 12502.252: 90.1679% ( 27) 00:08:55.337 12502.252 - 12552.665: 90.3590% ( 23) 00:08:55.337 12552.665 - 12603.077: 90.6084% ( 
30) 00:08:55.337 12603.077 - 12653.489: 90.8577% ( 30) 00:08:55.337 12653.489 - 12703.902: 91.0987% ( 29) 00:08:55.337 12703.902 - 12754.314: 91.3231% ( 27) 00:08:55.337 12754.314 - 12804.726: 91.5475% ( 27) 00:08:55.337 12804.726 - 12855.138: 91.7969% ( 30) 00:08:55.337 12855.138 - 12905.551: 92.0628% ( 32) 00:08:55.337 12905.551 - 13006.375: 92.6031% ( 65) 00:08:55.337 13006.375 - 13107.200: 93.1184% ( 62) 00:08:55.337 13107.200 - 13208.025: 93.5422% ( 51) 00:08:55.337 13208.025 - 13308.849: 93.8664% ( 39) 00:08:55.337 13308.849 - 13409.674: 94.2320% ( 44) 00:08:55.337 13409.674 - 13510.498: 94.5811% ( 42) 00:08:55.337 13510.498 - 13611.323: 94.8637% ( 34) 00:08:55.337 13611.323 - 13712.148: 95.1546% ( 35) 00:08:55.337 13712.148 - 13812.972: 95.4621% ( 37) 00:08:55.337 13812.972 - 13913.797: 95.7281% ( 32) 00:08:55.337 13913.797 - 14014.622: 95.9441% ( 26) 00:08:55.337 14014.622 - 14115.446: 96.1519% ( 25) 00:08:55.337 14115.446 - 14216.271: 96.3431% ( 23) 00:08:55.337 14216.271 - 14317.095: 96.5093% ( 20) 00:08:55.337 14317.095 - 14417.920: 96.6922% ( 22) 00:08:55.337 14417.920 - 14518.745: 96.9249% ( 28) 00:08:55.337 14518.745 - 14619.569: 97.1742% ( 30) 00:08:55.337 14619.569 - 14720.394: 97.3654% ( 23) 00:08:55.337 14720.394 - 14821.218: 97.5150% ( 18) 00:08:55.337 14821.218 - 14922.043: 97.6562% ( 17) 00:08:55.337 14922.043 - 15022.868: 97.7892% ( 16) 00:08:55.337 15022.868 - 15123.692: 97.9305% ( 17) 00:08:55.337 15123.692 - 15224.517: 98.0635% ( 16) 00:08:55.337 15224.517 - 15325.342: 98.2131% ( 18) 00:08:55.337 15325.342 - 15426.166: 98.3876% ( 21) 00:08:55.337 15426.166 - 15526.991: 98.5123% ( 15) 00:08:55.337 15526.991 - 15627.815: 98.5788% ( 8) 00:08:55.337 15627.815 - 15728.640: 98.6287% ( 6) 00:08:55.337 15728.640 - 15829.465: 98.6868% ( 7) 00:08:55.337 15829.465 - 15930.289: 98.7367% ( 6) 00:08:55.337 15930.289 - 16031.114: 98.7866% ( 6) 00:08:55.337 16031.114 - 16131.938: 98.8364% ( 6) 00:08:55.337 16131.938 - 16232.763: 98.8780% ( 5) 00:08:55.337 16232.763 - 16333.588: 98.9112% ( 4) 00:08:55.337 16333.588 - 16434.412: 98.9362% ( 3) 00:08:55.337 28634.191 - 28835.840: 98.9445% ( 1) 00:08:55.337 28835.840 - 29037.489: 99.0027% ( 7) 00:08:55.337 29037.489 - 29239.138: 99.0442% ( 5) 00:08:55.337 29239.138 - 29440.788: 99.0941% ( 6) 00:08:55.337 29440.788 - 29642.437: 99.1606% ( 8) 00:08:55.337 29642.437 - 29844.086: 99.2188% ( 7) 00:08:55.337 29844.086 - 30045.735: 99.2769% ( 7) 00:08:55.337 30045.735 - 30247.385: 99.3434% ( 8) 00:08:55.337 30247.385 - 30449.034: 99.4099% ( 8) 00:08:55.337 30449.034 - 30650.683: 99.4681% ( 7) 00:08:55.337 36901.809 - 37103.458: 99.4930% ( 3) 00:08:55.337 37103.458 - 37305.108: 99.5512% ( 7) 00:08:55.337 37305.108 - 37506.757: 99.6011% ( 6) 00:08:55.337 37506.757 - 37708.406: 99.6592% ( 7) 00:08:55.337 37708.406 - 37910.055: 99.7257% ( 8) 00:08:55.337 37910.055 - 38111.705: 99.7839% ( 7) 00:08:55.337 38111.705 - 38313.354: 99.8421% ( 7) 00:08:55.337 38313.354 - 38515.003: 99.9003% ( 7) 00:08:55.337 38515.003 - 38716.652: 99.9501% ( 6) 00:08:55.337 38716.652 - 38918.302: 100.0000% ( 6) 00:08:55.337 00:08:55.337 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:55.337 ============================================================================== 00:08:55.337 Range in us Cumulative IO count 00:08:55.337 8418.855 - 8469.268: 0.0416% ( 5) 00:08:55.337 8469.268 - 8519.680: 0.0831% ( 5) 00:08:55.337 8519.680 - 8570.092: 0.1496% ( 8) 00:08:55.337 8570.092 - 8620.505: 0.3823% ( 28) 00:08:55.337 8620.505 - 8670.917: 0.7314% ( 42) 
00:08:55.337 8670.917 - 8721.329: 1.2051% ( 57) 00:08:55.337 8721.329 - 8771.742: 1.7038% ( 60) 00:08:55.337 8771.742 - 8822.154: 2.2939% ( 71) 00:08:55.337 8822.154 - 8872.566: 3.0585% ( 92) 00:08:55.337 8872.566 - 8922.978: 4.0475% ( 119) 00:08:55.337 8922.978 - 8973.391: 5.0615% ( 122) 00:08:55.337 8973.391 - 9023.803: 6.3165% ( 151) 00:08:55.337 9023.803 - 9074.215: 7.7626% ( 174) 00:08:55.337 9074.215 - 9124.628: 9.5329% ( 213) 00:08:55.337 9124.628 - 9175.040: 11.3614% ( 220) 00:08:55.337 9175.040 - 9225.452: 13.3976% ( 245) 00:08:55.337 9225.452 - 9275.865: 15.5253% ( 256) 00:08:55.337 9275.865 - 9326.277: 17.8939% ( 285) 00:08:55.337 9326.277 - 9376.689: 20.5286% ( 317) 00:08:55.337 9376.689 - 9427.102: 22.9970% ( 297) 00:08:55.337 9427.102 - 9477.514: 25.4820% ( 299) 00:08:55.337 9477.514 - 9527.926: 28.0170% ( 305) 00:08:55.337 9527.926 - 9578.338: 30.6350% ( 315) 00:08:55.337 9578.338 - 9628.751: 33.3112% ( 322) 00:08:55.337 9628.751 - 9679.163: 35.9292% ( 315) 00:08:55.337 9679.163 - 9729.575: 38.6885% ( 332) 00:08:55.337 9729.575 - 9779.988: 41.2982% ( 314) 00:08:55.337 9779.988 - 9830.400: 43.8996% ( 313) 00:08:55.337 9830.400 - 9880.812: 46.5342% ( 317) 00:08:55.337 9880.812 - 9931.225: 49.0276% ( 300) 00:08:55.337 9931.225 - 9981.637: 51.3547% ( 280) 00:08:55.337 9981.637 - 10032.049: 53.6154% ( 272) 00:08:55.337 10032.049 - 10082.462: 55.7098% ( 252) 00:08:55.337 10082.462 - 10132.874: 57.6878% ( 238) 00:08:55.337 10132.874 - 10183.286: 59.4581% ( 213) 00:08:55.337 10183.286 - 10233.698: 61.3115% ( 223) 00:08:55.337 10233.698 - 10284.111: 62.7327% ( 171) 00:08:55.337 10284.111 - 10334.523: 64.1705% ( 173) 00:08:55.337 10334.523 - 10384.935: 65.5834% ( 170) 00:08:55.337 10384.935 - 10435.348: 66.8551% ( 153) 00:08:55.337 10435.348 - 10485.760: 67.9937% ( 137) 00:08:55.337 10485.760 - 10536.172: 68.9910% ( 120) 00:08:55.337 10536.172 - 10586.585: 69.8471% ( 103) 00:08:55.337 10586.585 - 10636.997: 70.7281% ( 106) 00:08:55.337 10636.997 - 10687.409: 71.5675% ( 101) 00:08:55.337 10687.409 - 10737.822: 72.3986% ( 100) 00:08:55.337 10737.822 - 10788.234: 73.2380% ( 101) 00:08:55.337 10788.234 - 10838.646: 74.0193% ( 94) 00:08:55.337 10838.646 - 10889.058: 74.8088% ( 95) 00:08:55.337 10889.058 - 10939.471: 75.5153% ( 85) 00:08:55.337 10939.471 - 10989.883: 76.2550% ( 89) 00:08:55.337 10989.883 - 11040.295: 76.8035% ( 66) 00:08:55.337 11040.295 - 11090.708: 77.4435% ( 77) 00:08:55.337 11090.708 - 11141.120: 77.9837% ( 65) 00:08:55.337 11141.120 - 11191.532: 78.5489% ( 68) 00:08:55.337 11191.532 - 11241.945: 79.0642% ( 62) 00:08:55.337 11241.945 - 11292.357: 79.6127% ( 66) 00:08:55.337 11292.357 - 11342.769: 80.2028% ( 71) 00:08:55.337 11342.769 - 11393.182: 80.7513% ( 66) 00:08:55.337 11393.182 - 11443.594: 81.3996% ( 78) 00:08:55.337 11443.594 - 11494.006: 81.9814% ( 70) 00:08:55.337 11494.006 - 11544.418: 82.5881% ( 73) 00:08:55.337 11544.418 - 11594.831: 83.1782% ( 71) 00:08:55.337 11594.831 - 11645.243: 83.7101% ( 64) 00:08:55.337 11645.243 - 11695.655: 84.2088% ( 60) 00:08:55.337 11695.655 - 11746.068: 84.6825% ( 57) 00:08:55.337 11746.068 - 11796.480: 85.2227% ( 65) 00:08:55.337 11796.480 - 11846.892: 85.7048% ( 58) 00:08:55.338 11846.892 - 11897.305: 86.2201% ( 62) 00:08:55.338 11897.305 - 11947.717: 86.7021% ( 58) 00:08:55.338 11947.717 - 11998.129: 87.1260% ( 51) 00:08:55.338 11998.129 - 12048.542: 87.5166% ( 47) 00:08:55.338 12048.542 - 12098.954: 87.8823% ( 44) 00:08:55.338 12098.954 - 12149.366: 88.2729% ( 47) 00:08:55.338 12149.366 - 12199.778: 88.6553% ( 46) 
00:08:55.338 12199.778 - 12250.191: 88.9794% ( 39) 00:08:55.338 12250.191 - 12300.603: 89.3617% ( 46) 00:08:55.338 12300.603 - 12351.015: 89.6692% ( 37) 00:08:55.338 12351.015 - 12401.428: 89.9684% ( 36) 00:08:55.338 12401.428 - 12451.840: 90.2261% ( 31) 00:08:55.338 12451.840 - 12502.252: 90.4422% ( 26) 00:08:55.338 12502.252 - 12552.665: 90.6167% ( 21) 00:08:55.338 12552.665 - 12603.077: 90.8078% ( 23) 00:08:55.338 12603.077 - 12653.489: 91.0073% ( 24) 00:08:55.338 12653.489 - 12703.902: 91.2566% ( 30) 00:08:55.338 12703.902 - 12754.314: 91.4478% ( 23) 00:08:55.338 12754.314 - 12804.726: 91.6223% ( 21) 00:08:55.338 12804.726 - 12855.138: 91.8052% ( 22) 00:08:55.338 12855.138 - 12905.551: 92.0130% ( 25) 00:08:55.338 12905.551 - 13006.375: 92.3870% ( 45) 00:08:55.338 13006.375 - 13107.200: 92.9854% ( 72) 00:08:55.338 13107.200 - 13208.025: 93.4508% ( 56) 00:08:55.338 13208.025 - 13308.849: 93.9162% ( 56) 00:08:55.338 13308.849 - 13409.674: 94.3816% ( 56) 00:08:55.338 13409.674 - 13510.498: 94.7972% ( 50) 00:08:55.338 13510.498 - 13611.323: 95.1297% ( 40) 00:08:55.338 13611.323 - 13712.148: 95.4704% ( 41) 00:08:55.338 13712.148 - 13812.972: 95.7197% ( 30) 00:08:55.338 13812.972 - 13913.797: 95.9608% ( 29) 00:08:55.338 13913.797 - 14014.622: 96.1852% ( 27) 00:08:55.338 14014.622 - 14115.446: 96.2766% ( 11) 00:08:55.338 14115.446 - 14216.271: 96.3514% ( 9) 00:08:55.338 14216.271 - 14317.095: 96.4345% ( 10) 00:08:55.338 14317.095 - 14417.920: 96.5176% ( 10) 00:08:55.338 14417.920 - 14518.745: 96.6090% ( 11) 00:08:55.338 14518.745 - 14619.569: 96.8085% ( 24) 00:08:55.338 14619.569 - 14720.394: 97.0412% ( 28) 00:08:55.338 14720.394 - 14821.218: 97.2822% ( 29) 00:08:55.338 14821.218 - 14922.043: 97.4900% ( 25) 00:08:55.338 14922.043 - 15022.868: 97.7061% ( 26) 00:08:55.338 15022.868 - 15123.692: 97.9139% ( 25) 00:08:55.338 15123.692 - 15224.517: 98.1134% ( 24) 00:08:55.338 15224.517 - 15325.342: 98.3128% ( 24) 00:08:55.338 15325.342 - 15426.166: 98.5040% ( 23) 00:08:55.338 15426.166 - 15526.991: 98.7201% ( 26) 00:08:55.338 15526.991 - 15627.815: 98.8614% ( 17) 00:08:55.338 15627.815 - 15728.640: 98.9112% ( 6) 00:08:55.338 15728.640 - 15829.465: 98.9362% ( 3) 00:08:55.338 27424.295 - 27625.945: 98.9528% ( 2) 00:08:55.338 27625.945 - 27827.594: 99.0110% ( 7) 00:08:55.338 27827.594 - 28029.243: 99.0775% ( 8) 00:08:55.338 28029.243 - 28230.892: 99.1439% ( 8) 00:08:55.338 28230.892 - 28432.542: 99.2021% ( 7) 00:08:55.338 28432.542 - 28634.191: 99.2686% ( 8) 00:08:55.338 28634.191 - 28835.840: 99.3268% ( 7) 00:08:55.338 28835.840 - 29037.489: 99.3850% ( 7) 00:08:55.338 29037.489 - 29239.138: 99.4515% ( 8) 00:08:55.338 29239.138 - 29440.788: 99.4681% ( 2) 00:08:55.338 35288.615 - 35490.265: 99.4930% ( 3) 00:08:55.338 35490.265 - 35691.914: 99.5512% ( 7) 00:08:55.338 35691.914 - 35893.563: 99.6094% ( 7) 00:08:55.338 35893.563 - 36095.212: 99.6676% ( 7) 00:08:55.338 36095.212 - 36296.862: 99.7257% ( 7) 00:08:55.338 36296.862 - 36498.511: 99.7839% ( 7) 00:08:55.338 36498.511 - 36700.160: 99.8338% ( 6) 00:08:55.338 36700.160 - 36901.809: 99.9003% ( 8) 00:08:55.338 36901.809 - 37103.458: 99.9584% ( 7) 00:08:55.338 37103.458 - 37305.108: 100.0000% ( 5) 00:08:55.338 00:08:55.338 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:55.338 ============================================================================== 00:08:55.338 Range in us Cumulative IO count 00:08:55.338 8519.680 - 8570.092: 0.0083% ( 1) 00:08:55.338 8570.092 - 8620.505: 0.1828% ( 21) 00:08:55.338 8620.505 - 8670.917: 0.5153% ( 
40) 00:08:55.338 8670.917 - 8721.329: 0.9807% ( 56) 00:08:55.338 8721.329 - 8771.742: 1.5625% ( 70) 00:08:55.338 8771.742 - 8822.154: 2.2357% ( 81) 00:08:55.338 8822.154 - 8872.566: 3.0668% ( 100) 00:08:55.338 8872.566 - 8922.978: 3.9727% ( 109) 00:08:55.338 8922.978 - 8973.391: 5.0947% ( 135) 00:08:55.338 8973.391 - 9023.803: 6.4578% ( 164) 00:08:55.338 9023.803 - 9074.215: 7.9704% ( 182) 00:08:55.338 9074.215 - 9124.628: 9.5495% ( 190) 00:08:55.338 9124.628 - 9175.040: 11.4029% ( 223) 00:08:55.338 9175.040 - 9225.452: 13.6553% ( 271) 00:08:55.338 9225.452 - 9275.865: 15.9242% ( 273) 00:08:55.338 9275.865 - 9326.277: 18.2181% ( 276) 00:08:55.338 9326.277 - 9376.689: 20.5618% ( 282) 00:08:55.338 9376.689 - 9427.102: 23.0801% ( 303) 00:08:55.338 9427.102 - 9477.514: 25.6649% ( 311) 00:08:55.338 9477.514 - 9527.926: 28.3328% ( 321) 00:08:55.338 9527.926 - 9578.338: 31.0921% ( 332) 00:08:55.338 9578.338 - 9628.751: 33.7600% ( 321) 00:08:55.338 9628.751 - 9679.163: 36.6938% ( 353) 00:08:55.338 9679.163 - 9729.575: 39.5279% ( 341) 00:08:55.338 9729.575 - 9779.988: 42.2955% ( 333) 00:08:55.338 9779.988 - 9830.400: 44.8886% ( 312) 00:08:55.338 9830.400 - 9880.812: 47.4318% ( 306) 00:08:55.338 9880.812 - 9931.225: 49.8172% ( 287) 00:08:55.338 9931.225 - 9981.637: 52.0362% ( 267) 00:08:55.338 9981.637 - 10032.049: 54.3717% ( 281) 00:08:55.338 10032.049 - 10082.462: 56.4827% ( 254) 00:08:55.338 10082.462 - 10132.874: 58.4275% ( 234) 00:08:55.338 10132.874 - 10183.286: 60.0233% ( 192) 00:08:55.338 10183.286 - 10233.698: 61.5608% ( 185) 00:08:55.338 10233.698 - 10284.111: 62.9654% ( 169) 00:08:55.338 10284.111 - 10334.523: 64.3451% ( 166) 00:08:55.338 10334.523 - 10384.935: 65.7497% ( 169) 00:08:55.338 10384.935 - 10435.348: 66.9299% ( 142) 00:08:55.338 10435.348 - 10485.760: 68.0436% ( 134) 00:08:55.338 10485.760 - 10536.172: 69.0243% ( 118) 00:08:55.338 10536.172 - 10586.585: 69.9053% ( 106) 00:08:55.338 10586.585 - 10636.997: 70.6948% ( 95) 00:08:55.338 10636.997 - 10687.409: 71.4927% ( 96) 00:08:55.338 10687.409 - 10737.822: 72.2989% ( 97) 00:08:55.338 10737.822 - 10788.234: 73.0053% ( 85) 00:08:55.338 10788.234 - 10838.646: 73.7035% ( 84) 00:08:55.338 10838.646 - 10889.058: 74.3600% ( 79) 00:08:55.338 10889.058 - 10939.471: 75.0249% ( 80) 00:08:55.338 10939.471 - 10989.883: 75.6316% ( 73) 00:08:55.338 10989.883 - 11040.295: 76.3880% ( 91) 00:08:55.338 11040.295 - 11090.708: 77.1110% ( 87) 00:08:55.338 11090.708 - 11141.120: 77.7759% ( 80) 00:08:55.338 11141.120 - 11191.532: 78.5239% ( 90) 00:08:55.338 11191.532 - 11241.945: 79.2387% ( 86) 00:08:55.338 11241.945 - 11292.357: 79.8703% ( 76) 00:08:55.338 11292.357 - 11342.769: 80.5519% ( 82) 00:08:55.338 11342.769 - 11393.182: 81.1669% ( 74) 00:08:55.338 11393.182 - 11443.594: 81.7736% ( 73) 00:08:55.338 11443.594 - 11494.006: 82.3554% ( 70) 00:08:55.338 11494.006 - 11544.418: 82.9455% ( 71) 00:08:55.338 11544.418 - 11594.831: 83.6270% ( 82) 00:08:55.338 11594.831 - 11645.243: 84.2670% ( 77) 00:08:55.338 11645.243 - 11695.655: 84.8321% ( 68) 00:08:55.338 11695.655 - 11746.068: 85.3142% ( 58) 00:08:55.338 11746.068 - 11796.480: 85.7713% ( 55) 00:08:55.338 11796.480 - 11846.892: 86.1951% ( 51) 00:08:55.338 11846.892 - 11897.305: 86.6024% ( 49) 00:08:55.338 11897.305 - 11947.717: 86.9681% ( 44) 00:08:55.338 11947.717 - 11998.129: 87.2839% ( 38) 00:08:55.338 11998.129 - 12048.542: 87.5914% ( 37) 00:08:55.338 12048.542 - 12098.954: 87.9156% ( 39) 00:08:55.338 12098.954 - 12149.366: 88.2397% ( 39) 00:08:55.338 12149.366 - 12199.778: 88.5721% ( 40) 
00:08:55.338 12199.778 - 12250.191: 88.8713% ( 36)
00:08:55.338 12250.191 - 12300.603: 89.1290% ( 31)
00:08:55.338 12300.603 - 12351.015: 89.4448% ( 38)
00:08:55.338 12351.015 - 12401.428: 89.7773% ( 40)
00:08:55.338 12401.428 - 12451.840: 90.1097% ( 40)
00:08:55.338 12451.840 - 12502.252: 90.3507% ( 29)
00:08:55.338 12502.252 - 12552.665: 90.5918% ( 29)
00:08:55.338 12552.665 - 12603.077: 90.8245% ( 28)
00:08:55.338 12603.077 - 12653.489: 91.0239% ( 24)
00:08:55.338 12653.489 - 12703.902: 91.2317% ( 25)
00:08:55.338 12703.902 - 12754.314: 91.4312% ( 24)
00:08:55.338 12754.314 - 12804.726: 91.6390% ( 25)
00:08:55.338 12804.726 - 12855.138: 91.8551% ( 26)
00:08:55.338 12855.138 - 12905.551: 92.0711% ( 26)
00:08:55.339 12905.551 - 13006.375: 92.5116% ( 53)
00:08:55.339 13006.375 - 13107.200: 92.9023% ( 47)
00:08:55.339 13107.200 - 13208.025: 93.2929% ( 47)
00:08:55.339 13208.025 - 13308.849: 93.6170% ( 39)
00:08:55.339 13308.849 - 13409.674: 93.8996% ( 34)
00:08:55.339 13409.674 - 13510.498: 94.2237% ( 39)
00:08:55.339 13510.498 - 13611.323: 94.5063% ( 34)
00:08:55.339 13611.323 - 13712.148: 94.8138% ( 37)
00:08:55.339 13712.148 - 13812.972: 95.1380% ( 39)
00:08:55.339 13812.972 - 13913.797: 95.4289% ( 35)
00:08:55.339 13913.797 - 14014.622: 95.7530% ( 39)
00:08:55.339 14014.622 - 14115.446: 96.1021% ( 42)
00:08:55.339 14115.446 - 14216.271: 96.3846% ( 34)
00:08:55.339 14216.271 - 14317.095: 96.5924% ( 25)
00:08:55.339 14317.095 - 14417.920: 96.7753% ( 22)
00:08:55.339 14417.920 - 14518.745: 96.9249% ( 18)
00:08:55.339 14518.745 - 14619.569: 97.0828% ( 19)
00:08:55.339 14619.569 - 14720.394: 97.2906% ( 25)
00:08:55.339 14720.394 - 14821.218: 97.4817% ( 23)
00:08:55.339 14821.218 - 14922.043: 97.6562% ( 21)
00:08:55.339 14922.043 - 15022.868: 97.7809% ( 15)
00:08:55.339 15022.868 - 15123.692: 97.8890% ( 13)
00:08:55.339 15123.692 - 15224.517: 97.9887% ( 12)
00:08:55.339 15224.517 - 15325.342: 98.0884% ( 12)
00:08:55.339 15325.342 - 15426.166: 98.2713% ( 22)
00:08:55.339 15426.166 - 15526.991: 98.4126% ( 17)
00:08:55.339 15526.991 - 15627.815: 98.5622% ( 18)
00:08:55.339 15627.815 - 15728.640: 98.6370% ( 9)
00:08:55.339 15728.640 - 15829.465: 98.6951% ( 7)
00:08:55.339 15829.465 - 15930.289: 98.7367% ( 5)
00:08:55.339 15930.289 - 16031.114: 98.7783% ( 5)
00:08:55.339 16031.114 - 16131.938: 98.8281% ( 6)
00:08:55.339 16131.938 - 16232.763: 98.8780% ( 6)
00:08:55.339 16232.763 - 16333.588: 98.9279% ( 6)
00:08:55.339 16333.588 - 16434.412: 98.9362% ( 1)
00:08:55.339 25811.102 - 26012.751: 98.9611% ( 3)
00:08:55.339 26012.751 - 26214.400: 99.0193% ( 7)
00:08:55.339 26214.400 - 26416.049: 99.0858% ( 8)
00:08:55.339 26416.049 - 26617.698: 99.1439% ( 7)
00:08:55.339 26617.698 - 26819.348: 99.2104% ( 8)
00:08:55.339 26819.348 - 27020.997: 99.2686% ( 7)
00:08:55.339 27020.997 - 27222.646: 99.3351% ( 8)
00:08:55.339 27222.646 - 27424.295: 99.3933% ( 7)
00:08:55.339 27424.295 - 27625.945: 99.4598% ( 8)
00:08:55.339 27625.945 - 27827.594: 99.4681% ( 1)
00:08:55.339 33877.071 - 34078.720: 99.5263% ( 7)
00:08:55.339 34078.720 - 34280.369: 99.5761% ( 6)
00:08:55.339 34280.369 - 34482.018: 99.6343% ( 7)
00:08:55.339 34482.018 - 34683.668: 99.6925% ( 7)
00:08:55.339 34683.668 - 34885.317: 99.7507% ( 7)
00:08:55.339 34885.317 - 35086.966: 99.8088% ( 7)
00:08:55.339 35086.966 - 35288.615: 99.8670% ( 7)
00:08:55.339 35288.615 - 35490.265: 99.9252% ( 7)
00:08:55.339 35490.265 - 35691.914: 99.9834% ( 7)
00:08:55.339 35691.914 - 35893.563: 100.0000% ( 2)
00:08:55.339
00:08:55.339 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:55.339 ==============================================================================
00:08:55.339 Range in us Cumulative IO count
00:08:55.339 8469.268 - 8519.680: 0.0661% ( 8)
00:08:55.339 8570.092 - 8620.505: 0.1984% ( 16)
00:08:55.339 8620.505 - 8670.917: 0.4960% ( 36)
00:08:55.339 8670.917 - 8721.329: 0.9507% ( 55)
00:08:55.339 8721.329 - 8771.742: 1.5377% ( 71)
00:08:55.339 8771.742 - 8822.154: 2.1412% ( 73)
00:08:55.339 8822.154 - 8872.566: 2.9101% ( 93)
00:08:55.339 8872.566 - 8922.978: 3.8029% ( 108)
00:08:55.339 8922.978 - 8973.391: 4.9190% ( 135)
00:08:55.339 8973.391 - 9023.803: 6.3244% ( 170)
00:08:55.339 9023.803 - 9074.215: 7.9365% ( 195)
00:08:55.339 9074.215 - 9124.628: 9.7470% ( 219)
00:08:55.339 9124.628 - 9175.040: 11.7063% ( 237)
00:08:55.339 9175.040 - 9225.452: 13.7483% ( 247)
00:08:55.339 9225.452 - 9275.865: 16.1293% ( 288)
00:08:55.339 9275.865 - 9326.277: 18.4358% ( 279)
00:08:55.339 9326.277 - 9376.689: 20.8085% ( 287)
00:08:55.339 9376.689 - 9427.102: 23.3052% ( 302)
00:08:55.339 9427.102 - 9477.514: 25.8102% ( 303)
00:08:55.339 9477.514 - 9527.926: 28.2573% ( 296)
00:08:55.339 9527.926 - 9578.338: 30.8036% ( 308)
00:08:55.339 9578.338 - 9628.751: 33.5896% ( 337)
00:08:55.339 9628.751 - 9679.163: 36.3261% ( 331)
00:08:55.339 9679.163 - 9729.575: 39.2030% ( 348)
00:08:55.339 9729.575 - 9779.988: 41.8733% ( 323)
00:08:55.339 9779.988 - 9830.400: 44.5437% ( 323)
00:08:55.339 9830.400 - 9880.812: 47.1147% ( 311)
00:08:55.339 9880.812 - 9931.225: 49.5122% ( 290)
00:08:55.339 9931.225 - 9981.637: 51.6617% ( 260)
00:08:55.339 9981.637 - 10032.049: 53.7698% ( 255)
00:08:55.339 10032.049 - 10082.462: 55.7209% ( 236)
00:08:55.339 10082.462 - 10132.874: 57.4074% ( 204)
00:08:55.339 10132.874 - 10183.286: 59.0443% ( 198)
00:08:55.339 10183.286 - 10233.698: 60.6316% ( 192)
00:08:55.339 10233.698 - 10284.111: 62.1032% ( 178)
00:08:55.339 10284.111 - 10334.523: 63.4673% ( 165)
00:08:55.339 10334.523 - 10384.935: 64.7735% ( 158)
00:08:55.339 10384.935 - 10435.348: 66.0384% ( 153)
00:08:55.339 10435.348 - 10485.760: 67.2784% ( 150)
00:08:55.339 10485.760 - 10536.172: 68.6095% ( 161)
00:08:55.339 10536.172 - 10586.585: 69.7917% ( 143)
00:08:55.339 10586.585 - 10636.997: 70.8581% ( 129)
00:08:55.339 10636.997 - 10687.409: 71.7510% ( 108)
00:08:55.339 10687.409 - 10737.822: 72.6521% ( 109)
00:08:55.339 10737.822 - 10788.234: 73.4375% ( 95)
00:08:55.339 10788.234 - 10838.646: 74.2477% ( 98)
00:08:55.339 10838.646 - 10889.058: 75.0909% ( 102)
00:08:55.339 10889.058 - 10939.471: 75.8185% ( 88)
00:08:55.339 10939.471 - 10989.883: 76.5212% ( 85)
00:08:55.339 10989.883 - 11040.295: 77.1991% ( 82)
00:08:55.339 11040.295 - 11090.708: 77.8770% ( 82)
00:08:55.339 11090.708 - 11141.120: 78.4722% ( 72)
00:08:55.339 11141.120 - 11191.532: 79.0427% ( 69)
00:08:55.339 11191.532 - 11241.945: 79.5635% ( 63)
00:08:55.339 11241.945 - 11292.357: 80.1670% ( 73)
00:08:55.339 11292.357 - 11342.769: 80.7540% ( 71)
00:08:55.339 11342.769 - 11393.182: 81.3409% ( 71)
00:08:55.339 11393.182 - 11443.594: 81.9114% ( 69)
00:08:55.339 11443.594 - 11494.006: 82.4653% ( 67)
00:08:55.339 11494.006 - 11544.418: 82.9778% ( 62)
00:08:55.339 11544.418 - 11594.831: 83.4325% ( 55)
00:08:55.339 11594.831 - 11645.243: 83.9534% ( 63)
00:08:55.339 11645.243 - 11695.655: 84.3998% ( 54)
00:08:55.339 11695.655 - 11746.068: 84.7966% ( 48)
00:08:55.339 11746.068 - 11796.480: 85.1604% ( 44)
00:08:55.339 11796.480 - 11846.892: 85.5076% ( 42)
00:08:55.339 11846.892 - 11897.305: 85.8300% ( 39)
00:08:55.339 11897.305 - 11947.717: 86.1855% ( 43)
00:08:55.339 11947.717 - 11998.129: 86.5493% ( 44)
00:08:55.339 11998.129 - 12048.542: 86.8800% ( 40)
00:08:55.339 12048.542 - 12098.954: 87.2024% ( 39)
00:08:55.339 12098.954 - 12149.366: 87.5083% ( 37)
00:08:55.339 12149.366 - 12199.778: 87.8059% ( 36)
00:08:55.339 12199.778 - 12250.191: 88.0622% ( 31)
00:08:55.339 12250.191 - 12300.603: 88.3433% ( 34)
00:08:55.339 12300.603 - 12351.015: 88.6409% ( 36)
00:08:55.339 12351.015 - 12401.428: 88.9798% ( 41)
00:08:55.339 12401.428 - 12451.840: 89.2444% ( 32)
00:08:55.339 12451.840 - 12502.252: 89.4428% ( 24)
00:08:55.339 12502.252 - 12552.665: 89.6164% ( 21)
00:08:55.339 12552.665 - 12603.077: 89.8231% ( 25)
00:08:55.339 12603.077 - 12653.489: 90.0050% ( 22)
00:08:55.339 12653.489 - 12703.902: 90.1620% ( 19)
00:08:55.339 12703.902 - 12754.314: 90.3439% ( 22)
00:08:55.340 12754.314 - 12804.726: 90.5671% ( 27)
00:08:55.340 12804.726 - 12855.138: 90.7986% ( 28)
00:08:55.340 12855.138 - 12905.551: 91.0218% ( 27)
00:08:55.340 12905.551 - 13006.375: 91.4104% ( 47)
00:08:55.340 13006.375 - 13107.200: 91.8403% ( 52)
00:08:55.340 13107.200 - 13208.025: 92.2536% ( 50)
00:08:55.340 13208.025 - 13308.849: 92.7331% ( 58)
00:08:55.340 13308.849 - 13409.674: 93.1548% ( 51)
00:08:55.340 13409.674 - 13510.498: 93.5351% ( 46)
00:08:55.340 13510.498 - 13611.323: 93.8740% ( 41)
00:08:55.340 13611.323 - 13712.148: 94.2626% ( 47)
00:08:55.340 13712.148 - 13812.972: 94.7090% ( 54)
00:08:55.340 13812.972 - 13913.797: 95.0314% ( 39)
00:08:55.340 13913.797 - 14014.622: 95.3952% ( 44)
00:08:55.340 14014.622 - 14115.446: 95.7755% ( 46)
00:08:55.340 14115.446 - 14216.271: 96.1310% ( 43)
00:08:55.340 14216.271 - 14317.095: 96.4203% ( 35)
00:08:55.340 14317.095 - 14417.920: 96.6022% ( 22)
00:08:55.340 14417.920 - 14518.745: 96.7923% ( 23)
00:08:55.340 14518.745 - 14619.569: 97.0403% ( 30)
00:08:55.340 14619.569 - 14720.394: 97.2884% ( 30)
00:08:55.340 14720.394 - 14821.218: 97.5033% ( 26)
00:08:55.340 14821.218 - 14922.043: 97.6190% ( 14)
00:08:55.340 14922.043 - 15022.868: 97.7761% ( 19)
00:08:55.340 15022.868 - 15123.692: 97.8836% ( 13)
00:08:55.340 15123.692 - 15224.517: 97.9828% ( 12)
00:08:55.340 15224.517 - 15325.342: 98.0903% ( 13)
00:08:55.340 15325.342 - 15426.166: 98.1895% ( 12)
00:08:55.340 15426.166 - 15526.991: 98.2887% ( 12)
00:08:55.340 15526.991 - 15627.815: 98.4788% ( 23)
00:08:55.340 15627.815 - 15728.640: 98.5367% ( 7)
00:08:55.340 15728.640 - 15829.465: 98.5780% ( 5)
00:08:55.340 15829.465 - 15930.289: 98.6276% ( 6)
00:08:55.340 15930.289 - 16031.114: 98.7021% ( 9)
00:08:55.340 16031.114 - 16131.938: 98.7930% ( 11)
00:08:55.340 16131.938 - 16232.763: 98.8674% ( 9)
00:08:55.340 16232.763 - 16333.588: 98.9501% ( 10)
00:08:55.340 16333.588 - 16434.412: 99.0245% ( 9)
00:08:55.340 16434.412 - 16535.237: 99.1071% ( 10)
00:08:55.340 16535.237 - 16636.062: 99.1567% ( 6)
00:08:55.340 16636.062 - 16736.886: 99.1898% ( 4)
00:08:55.340 16736.886 - 16837.711: 99.2146% ( 3)
00:08:55.340 16837.711 - 16938.535: 99.2477% ( 4)
00:08:55.340 16938.535 - 17039.360: 99.2808% ( 4)
00:08:55.340 17039.360 - 17140.185: 99.3056% ( 3)
00:08:55.340 17140.185 - 17241.009: 99.3386% ( 4)
00:08:55.340 17241.009 - 17341.834: 99.3717% ( 4)
00:08:55.340 17341.834 - 17442.658: 99.3965% ( 3)
00:08:55.340 17442.658 - 17543.483: 99.4296% ( 4)
00:08:55.340 17543.483 - 17644.308: 99.4626% ( 4)
00:08:55.340 17644.308 - 17745.132: 99.4709% ( 1)
00:08:55.340 25710.277 - 25811.102: 99.4957% ( 3)
00:08:55.340 25811.102 - 26012.751: 99.5618% ( 8)
00:08:55.340 26012.751 - 26214.400: 99.6280% ( 8)
00:08:55.340 26214.400 - 26416.049: 99.6858% ( 7)
00:08:55.340 26416.049 - 26617.698: 99.7437% ( 7)
00:08:55.340 26617.698 - 26819.348: 99.8099% ( 8)
00:08:55.340 26819.348 - 27020.997: 99.8677% ( 7)
00:08:55.340 27020.997 - 27222.646: 99.9339% ( 8)
00:08:55.340 27222.646 - 27424.295: 99.9917% ( 7)
00:08:55.340 27424.295 - 27625.945: 100.0000% ( 1)
00:08:55.340
00:08:55.340 12:12:26 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:56.714 Initializing NVMe Controllers
00:08:56.714 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:56.714 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:56.714 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:56.714 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:56.714 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:56.714 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:56.714 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:56.714 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:56.714 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:56.714 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:56.714 Initialization complete. Launching workers.
00:08:56.714 ========================================================
00:08:56.714 Latency(us)
00:08:56.714 Device Information : IOPS MiB/s Average min max
00:08:56.714 PCIE (0000:00:10.0) NSID 1 from core 0: 12781.51 149.78 10027.64 7737.10 35987.63
00:08:56.714 PCIE (0000:00:11.0) NSID 1 from core 0: 12781.51 149.78 10009.60 7857.82 33809.88
00:08:56.714 PCIE (0000:00:13.0) NSID 1 from core 0: 12781.51 149.78 9991.63 7776.84 32638.54
00:08:56.714 PCIE (0000:00:12.0) NSID 1 from core 0: 12781.51 149.78 9973.84 7720.82 30611.71
00:08:56.714 PCIE (0000:00:12.0) NSID 2 from core 0: 12781.51 149.78 9956.00 7721.27 28920.69
00:08:56.714 PCIE (0000:00:12.0) NSID 3 from core 0: 12781.51 149.78 9938.21 7759.14 27004.71
00:08:56.714 ========================================================
00:08:56.714 Total : 76689.03 898.70 9982.82 7720.82 35987.63
00:08:56.714
00:08:56.714 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:56.714 =================================================================================
00:08:56.714 1.00000% : 8015.557us
00:08:56.714 10.00000% : 8670.917us
00:08:56.714 25.00000% : 9074.215us
00:08:56.714 50.00000% : 9527.926us
00:08:56.714 75.00000% : 10284.111us
00:08:56.714 90.00000% : 11594.831us
00:08:56.714 95.00000% : 12401.428us
00:08:56.714 98.00000% : 14720.394us
00:08:56.714 99.00000% : 18450.905us
00:08:56.714 99.50000% : 28432.542us
00:08:56.714 99.90000% : 35691.914us
00:08:56.714 99.99000% : 36095.212us
00:08:56.714 99.99900% : 36095.212us
00:08:56.714 99.99990% : 36095.212us
00:08:56.715 99.99999% : 36095.212us
00:08:56.715
00:08:56.715 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:56.715 =================================================================================
00:08:56.715 1.00000% : 8166.794us
00:08:56.715 10.00000% : 8670.917us
00:08:56.715 25.00000% : 9124.628us
00:08:56.715 50.00000% : 9527.926us
00:08:56.715 75.00000% : 10132.874us
00:08:56.715 90.00000% : 11594.831us
00:08:56.715 95.00000% : 12300.603us
00:08:56.715 98.00000% : 14720.394us
00:08:56.715 99.00000% : 18854.203us
00:08:56.715 99.50000% : 27827.594us
00:08:56.715 99.90000% : 33473.772us
00:08:56.715 99.99000% : 33877.071us
00:08:56.715 99.99900% : 33877.071us
00:08:56.715 99.99990% : 33877.071us
00:08:56.715 99.99999% : 33877.071us
00:08:56.715
00:08:56.715 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:56.715 =================================================================================
00:08:56.715 1.00000% : 8166.794us
00:08:56.715 10.00000% : 8670.917us
00:08:56.715 25.00000% : 9074.215us
00:08:56.715 50.00000% : 9527.926us
00:08:56.715 75.00000% : 10183.286us
00:08:56.715 90.00000% : 11494.006us
00:08:56.715 95.00000% : 12300.603us
00:08:56.715 98.00000% : 14821.218us
00:08:56.715 99.00000% : 18955.028us
00:08:56.715 99.50000% : 26416.049us
00:08:56.715 99.90000% : 32465.526us
00:08:56.715 99.99000% : 32667.175us
00:08:56.715 99.99900% : 32667.175us
00:08:56.715 99.99990% : 32667.175us
00:08:56.715 99.99999% : 32667.175us
00:08:56.715
00:08:56.715 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:56.715 =================================================================================
00:08:56.715 1.00000% : 8166.794us
00:08:56.715 10.00000% : 8670.917us
00:08:56.715 25.00000% : 9124.628us
00:08:56.715 50.00000% : 9527.926us
00:08:56.715 75.00000% : 10183.286us
00:08:56.715 90.00000% : 11494.006us
00:08:56.715 95.00000% : 12300.603us
00:08:56.715 98.00000% : 14216.271us
00:08:56.715 99.00000% : 18652.554us
00:08:56.715 99.50000% : 24903.680us
00:08:56.715 99.90000% : 30247.385us
00:08:56.715 99.99000% : 30650.683us
00:08:56.715 99.99900% : 30650.683us
00:08:56.715 99.99990% : 30650.683us
00:08:56.715 99.99999% : 30650.683us
00:08:56.715
00:08:56.715 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:56.715 =================================================================================
00:08:56.715 1.00000% : 8116.382us
00:08:56.715 10.00000% : 8670.917us
00:08:56.715 25.00000% : 9124.628us
00:08:56.715 50.00000% : 9527.926us
00:08:56.715 75.00000% : 10233.698us
00:08:56.715 90.00000% : 11494.006us
00:08:56.715 95.00000% : 12300.603us
00:08:56.715 98.00000% : 14417.920us
00:08:56.715 99.00000% : 18148.431us
00:08:56.715 99.50000% : 22988.012us
00:08:56.715 99.90000% : 28634.191us
00:08:56.715 99.99000% : 29037.489us
00:08:56.715 99.99900% : 29037.489us
00:08:56.715 99.99990% : 29037.489us
00:08:56.715 99.99999% : 29037.489us
00:08:56.715
00:08:56.715 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:56.715 =================================================================================
00:08:56.715 1.00000% : 8166.794us
00:08:56.715 10.00000% : 8721.329us
00:08:56.715 25.00000% : 9124.628us
00:08:56.715 50.00000% : 9527.926us
00:08:56.715 75.00000% : 10183.286us
00:08:56.715 90.00000% : 11594.831us
00:08:56.715 95.00000% : 12351.015us
00:08:56.715 98.00000% : 14317.095us
00:08:56.715 99.00000% : 17745.132us
00:08:56.715 99.50000% : 21475.643us
00:08:56.715 99.90000% : 26819.348us
00:08:56.715 99.99000% : 27020.997us
00:08:56.715 99.99900% : 27020.997us
00:08:56.715 99.99990% : 27020.997us
00:08:56.715 99.99999% : 27020.997us
00:08:56.715
00:08:56.715 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:56.715 ==============================================================================
00:08:56.715 Range in us Cumulative IO count
00:08:56.715 7713.083 - 7763.495: 0.0156% ( 2)
00:08:56.715 7763.495 - 7813.908: 0.0312% ( 2)
00:08:56.715 7813.908 - 7864.320: 0.1797% ( 19)
00:08:56.715 7864.320 - 7914.732: 0.3516% ( 22)
00:08:56.715 7914.732 - 7965.145: 0.7656% ( 53)
00:08:56.715 7965.145 - 8015.557: 1.1016% ( 43)
00:08:56.715 8015.557 - 8065.969: 1.4219% ( 41)
00:08:56.715 8065.969 - 8116.382: 1.8438% ( 54)
00:08:56.715 8116.382 - 8166.794: 2.3125% ( 60)
00:08:56.715 8166.794 - 8217.206: 2.9531% ( 82)
00:08:56.715 8217.206 - 8267.618: 3.6016% ( 83)
00:08:56.715 8267.618 - 8318.031: 4.2109% ( 78)
00:08:56.715 8318.031 - 8368.443: 4.8594% ( 83)
00:08:56.715 8368.443 - 8418.855: 5.8047% ( 121)
00:08:56.715 8418.855 - 8469.268: 6.8359% ( 132)
00:08:56.715 8469.268 - 8519.680: 7.8828% ( 134)
00:08:56.715 8519.680 - 8570.092: 8.8438% ( 123)
00:08:56.715 8570.092 - 8620.505: 9.9219% ( 138)
00:08:56.715 8620.505 - 8670.917: 11.1016% ( 151)
00:08:56.715 8670.917 - 8721.329: 12.8125% ( 219)
00:08:56.715 8721.329 - 8771.742: 14.3906% ( 202)
00:08:56.715 8771.742 - 8822.154: 16.3750% ( 254)
00:08:56.715 8822.154 - 8872.566: 18.1875% ( 232)
00:08:56.715 8872.566 - 8922.978: 20.2344% ( 262)
00:08:56.715 8922.978 - 8973.391: 22.4141% ( 279)
00:08:56.715 8973.391 - 9023.803: 24.6484% ( 286)
00:08:56.715 9023.803 - 9074.215: 27.0547% ( 308)
00:08:56.715 9074.215 - 9124.628: 29.5781% ( 323)
00:08:56.715 9124.628 - 9175.040: 32.0156% ( 312)
00:08:56.715 9175.040 - 9225.452: 34.4609% ( 313)
00:08:56.715 9225.452 - 9275.865: 37.0000% ( 325)
00:08:56.715 9275.865 - 9326.277: 39.9297% ( 375)
00:08:56.715 9326.277 - 9376.689: 42.8047% ( 368)
00:08:56.715 9376.689 - 9427.102: 45.7578% ( 378)
00:08:56.715 9427.102 - 9477.514: 48.5703% ( 360)
00:08:56.715 9477.514 - 9527.926: 51.0156% ( 313)
00:08:56.715 9527.926 - 9578.338: 53.4453% ( 311)
00:08:56.715 9578.338 - 9628.751: 55.8594% ( 309)
00:08:56.715 9628.751 - 9679.163: 58.0234% ( 277)
00:08:56.715 9679.163 - 9729.575: 60.0000% ( 253)
00:08:56.715 9729.575 - 9779.988: 61.7500% ( 224)
00:08:56.715 9779.988 - 9830.400: 63.3438% ( 204)
00:08:56.715 9830.400 - 9880.812: 64.9453% ( 205)
00:08:56.715 9880.812 - 9931.225: 66.5234% ( 202)
00:08:56.715 9931.225 - 9981.637: 68.1875% ( 213)
00:08:56.715 9981.637 - 10032.049: 69.5703% ( 177)
00:08:56.715 10032.049 - 10082.462: 70.9297% ( 174)
00:08:56.715 10082.462 - 10132.874: 72.2188% ( 165)
00:08:56.715 10132.874 - 10183.286: 73.4531% ( 158)
00:08:56.715 10183.286 - 10233.698: 74.5703% ( 143)
00:08:56.715 10233.698 - 10284.111: 75.5312% ( 123)
00:08:56.715 10284.111 - 10334.523: 76.4531% ( 118)
00:08:56.715 10334.523 - 10384.935: 77.4453% ( 127)
00:08:56.715 10384.935 - 10435.348: 78.3203% ( 112)
00:08:56.715 10435.348 - 10485.760: 79.2031% ( 113)
00:08:56.715 10485.760 - 10536.172: 79.9844% ( 100)
00:08:56.715 10536.172 - 10586.585: 80.5703% ( 75)
00:08:56.715 10586.585 - 10636.997: 81.2188% ( 83)
00:08:56.715 10636.997 - 10687.409: 81.7031% ( 62)
00:08:56.715 10687.409 - 10737.822: 82.2266% ( 67)
00:08:56.715 10737.822 - 10788.234: 82.8438% ( 79)
00:08:56.715 10788.234 - 10838.646: 83.3672% ( 67)
00:08:56.715 10838.646 - 10889.058: 83.7734% ( 52)
00:08:56.715 10889.058 - 10939.471: 84.1797% ( 52)
00:08:56.715 10939.471 - 10989.883: 84.5625% ( 49)
00:08:56.715 10989.883 - 11040.295: 84.9141% ( 45)
00:08:56.715 11040.295 - 11090.708: 85.3828% ( 60)
00:08:56.715 11090.708 - 11141.120: 85.8906% ( 65)
00:08:56.715 11141.120 - 11191.532: 86.3750% ( 62)
00:08:56.715 11191.532 - 11241.945: 87.0234% ( 83)
00:08:56.715 11241.945 - 11292.357: 87.6719% ( 83)
00:08:56.715 11292.357 - 11342.769: 88.1641% ( 63)
00:08:56.715 11342.769 - 11393.182: 88.7109% ( 70)
00:08:56.715 11393.182 - 11443.594: 89.1797% ( 60)
00:08:56.715 11443.594 - 11494.006: 89.5781% ( 51)
00:08:56.715 11494.006 - 11544.418: 89.9922% ( 53)
00:08:56.715 11544.418 - 11594.831: 90.3594% ( 47)
00:08:56.715 11594.831 - 11645.243: 90.7109% ( 45)
00:08:56.715 11645.243 - 11695.655: 91.0859% ( 48)
00:08:56.715 11695.655 - 11746.068: 91.4219% ( 43)
00:08:56.715 11746.068 - 11796.480: 91.6875% ( 34)
00:08:56.715 11796.480 - 11846.892: 92.0156% ( 42)
00:08:56.715 11846.892 - 11897.305: 92.3984% ( 49)
00:08:56.715 11897.305 - 11947.717: 92.6953% ( 38)
00:08:56.715 11947.717 - 11998.129: 92.9688% ( 35)
00:08:56.715 11998.129 - 12048.542: 93.2109% ( 31)
00:08:56.715 12048.542 - 12098.954: 93.4922% ( 36)
00:08:56.715 12098.954 - 12149.366: 93.8672% ( 48)
00:08:56.715 12149.366 - 12199.778: 94.1406% ( 35)
00:08:56.715 12199.778 - 12250.191: 94.4609% ( 41)
00:08:56.715 12250.191 - 12300.603: 94.7266% ( 34)
00:08:56.715 12300.603 - 12351.015: 94.9141% ( 24)
00:08:56.715 12351.015 - 12401.428: 95.1016% ( 24)
00:08:56.715 12401.428 - 12451.840: 95.3125% ( 27)
00:08:56.715 12451.840 - 12502.252: 95.4453% ( 17)
00:08:56.715 12502.252 - 12552.665: 95.6172% ( 22)
00:08:56.715 12552.665 - 12603.077: 95.8203% ( 26)
00:08:56.715 12603.077 - 12653.489: 96.0156% ( 25)
00:08:56.715 12653.489 - 12703.902: 96.1641% ( 19)
00:08:56.715 12703.902 - 12754.314: 96.2578% ( 12)
00:08:56.715 12754.314 - 12804.726: 96.3984% ( 18)
00:08:56.715 12804.726 - 12855.138: 96.4766% ( 10)
00:08:56.715 12855.138 - 12905.551: 96.5469% ( 9)
00:08:56.715 12905.551 - 13006.375: 96.6641% ( 15)
00:08:56.715 13006.375 - 13107.200: 96.8125% ( 19)
00:08:56.715 13107.200 - 13208.025: 96.9609% ( 19)
00:08:56.715 13208.025 - 13308.849: 97.0469% ( 11)
00:08:56.715 13308.849 - 13409.674: 97.1797% ( 17)
00:08:56.715 13409.674 - 13510.498: 97.2656% ( 11)
00:08:56.715 13510.498 - 13611.323: 97.3828% ( 15)
00:08:56.715 13611.323 - 13712.148: 97.4688% ( 11)
00:08:56.716 13712.148 - 13812.972: 97.6016% ( 17)
00:08:56.716 13812.972 - 13913.797: 97.7109% ( 14)
00:08:56.716 13913.797 - 14014.622: 97.7500% ( 5)
00:08:56.716 14014.622 - 14115.446: 97.7578% ( 1)
00:08:56.716 14115.446 - 14216.271: 97.7969% ( 5)
00:08:56.716 14216.271 - 14317.095: 97.8359% ( 5)
00:08:56.716 14317.095 - 14417.920: 97.8750% ( 5)
00:08:56.716 14417.920 - 14518.745: 97.9219% ( 6)
00:08:56.716 14518.745 - 14619.569: 97.9609% ( 5)
00:08:56.716 14619.569 - 14720.394: 98.0000% ( 5)
00:08:56.716 15325.342 - 15426.166: 98.0156% ( 2)
00:08:56.716 15426.166 - 15526.991: 98.0547% ( 5)
00:08:56.716 15526.991 - 15627.815: 98.0938% ( 5)
00:08:56.716 15627.815 - 15728.640: 98.1328% ( 5)
00:08:56.716 15728.640 - 15829.465: 98.1641% ( 4)
00:08:56.716 15829.465 - 15930.289: 98.2109% ( 6)
00:08:56.716 15930.289 - 16031.114: 98.2422% ( 4)
00:08:56.716 16031.114 - 16131.938: 98.3047% ( 8)
00:08:56.716 16131.938 - 16232.763: 98.3203% ( 2)
00:08:56.716 16232.763 - 16333.588: 98.3750% ( 7)
00:08:56.716 16333.588 - 16434.412: 98.4062% ( 4)
00:08:56.716 16434.412 - 16535.237: 98.4453% ( 5)
00:08:56.716 16535.237 - 16636.062: 98.4844% ( 5)
00:08:56.716 16636.062 - 16736.886: 98.5000% ( 2)
00:08:56.716 17241.009 - 17341.834: 98.5234% ( 3)
00:08:56.716 17341.834 - 17442.658: 98.5859% ( 8)
00:08:56.716 17442.658 - 17543.483: 98.6562% ( 9)
00:08:56.716 17543.483 - 17644.308: 98.7109% ( 7)
00:08:56.716 17644.308 - 17745.132: 98.7266% ( 2)
00:08:56.716 17745.132 - 17845.957: 98.7578% ( 4)
00:08:56.716 17845.957 - 17946.782: 98.7969% ( 5)
00:08:56.716 17946.782 - 18047.606: 98.8359% ( 5)
00:08:56.716 18047.606 - 18148.431: 98.8906% ( 7)
00:08:56.716 18148.431 - 18249.255: 98.9297% ( 5)
00:08:56.716 18249.255 - 18350.080: 98.9609% ( 4)
00:08:56.716 18350.080 - 18450.905: 99.0000% ( 5)
00:08:56.716 26617.698 - 26819.348: 99.1016% ( 13)
00:08:56.716 26819.348 - 27020.997: 99.1797% ( 10)
00:08:56.716 27020.997 - 27222.646: 99.2422% ( 8)
00:08:56.716 27222.646 - 27424.295: 99.2812% ( 5)
00:08:56.716 27424.295 - 27625.945: 99.3516% ( 9)
00:08:56.716 27625.945 - 27827.594: 99.3672% ( 2)
00:08:56.716 27827.594 - 28029.243: 99.4062% ( 5)
00:08:56.716 28029.243 - 28230.892: 99.4688% ( 8)
00:08:56.716 28230.892 - 28432.542: 99.5000% ( 4)
00:08:56.716 33675.422 - 33877.071: 99.5234% ( 3)
00:08:56.716 33877.071 - 34078.720: 99.5625% ( 5)
00:08:56.716 34078.720 - 34280.369: 99.6094% ( 6)
00:08:56.716 34280.369 - 34482.018: 99.6641% ( 7)
00:08:56.716 34482.018 - 34683.668: 99.6953% ( 4)
00:08:56.716 34683.668 - 34885.317: 99.7500% ( 7)
00:08:56.716 34885.317 - 35086.966: 99.7969% ( 6)
00:08:56.716 35086.966 - 35288.615: 99.8438% ( 6)
00:08:56.716 35288.615 - 35490.265: 99.8906% ( 6)
00:08:56.716 35490.265 - 35691.914: 99.9375% ( 6)
00:08:56.716 35691.914 - 35893.563: 99.9844% ( 6)
00:08:56.716 35893.563 - 36095.212: 100.0000% ( 2)
00:08:56.716
00:08:56.716 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:56.716 ==============================================================================
00:08:56.716 Range in us Cumulative IO count
00:08:56.716 7813.908 - 7864.320: 0.0156% ( 2)
00:08:56.716 7864.320 - 7914.732: 0.0547% ( 5)
00:08:56.716 7914.732 - 7965.145: 0.1641% ( 14)
00:08:56.716 7965.145 - 8015.557: 0.3125% ( 19)
00:08:56.716 8015.557 - 8065.969: 0.5781% ( 34)
00:08:56.716 8065.969 - 8116.382: 0.8594% ( 36)
00:08:56.716 8116.382 - 8166.794: 1.1641% ( 39)
00:08:56.716 8166.794 - 8217.206: 1.7188% ( 71)
00:08:56.716 8217.206 - 8267.618: 2.3438% ( 80)
00:08:56.716 8267.618 - 8318.031: 3.1406% ( 102)
00:08:56.716 8318.031 - 8368.443: 3.8984% ( 97)
00:08:56.716 8368.443 - 8418.855: 4.9062% ( 129)
00:08:56.716 8418.855 - 8469.268: 5.9688% ( 136)
00:08:56.716 8469.268 - 8519.680: 7.1641% ( 153)
00:08:56.716 8519.680 - 8570.092: 8.2578% ( 140)
00:08:56.716 8570.092 - 8620.505: 9.4062% ( 147)
00:08:56.716 8620.505 - 8670.917: 10.6406% ( 158)
00:08:56.716 8670.917 - 8721.329: 11.9766% ( 171)
00:08:56.716 8721.329 - 8771.742: 13.4766% ( 192)
00:08:56.716 8771.742 - 8822.154: 14.9688% ( 191)
00:08:56.716 8822.154 - 8872.566: 16.4922% ( 195)
00:08:56.716 8872.566 - 8922.978: 17.9141% ( 182)
00:08:56.716 8922.978 - 8973.391: 19.6406% ( 221)
00:08:56.716 8973.391 - 9023.803: 21.7266% ( 267)
00:08:56.716 9023.803 - 9074.215: 23.7891% ( 264)
00:08:56.716 9074.215 - 9124.628: 26.3828% ( 332)
00:08:56.716 9124.628 - 9175.040: 29.1719% ( 357)
00:08:56.716 9175.040 - 9225.452: 32.1250% ( 378)
00:08:56.716 9225.452 - 9275.865: 35.1094% ( 382)
00:08:56.716 9275.865 - 9326.277: 38.3047% ( 409)
00:08:56.716 9326.277 - 9376.689: 41.2422% ( 376)
00:08:56.716 9376.689 - 9427.102: 44.2109% ( 380)
00:08:56.716 9427.102 - 9477.514: 47.4297% ( 412)
00:08:56.716 9477.514 - 9527.926: 50.4141% ( 382)
00:08:56.716 9527.926 - 9578.338: 53.3594% ( 377)
00:08:56.716 9578.338 - 9628.751: 56.1094% ( 352)
00:08:56.716 9628.751 - 9679.163: 58.9844% ( 368)
00:08:56.716 9679.163 - 9729.575: 61.4922% ( 321)
00:08:56.716 9729.575 - 9779.988: 63.8750% ( 305)
00:08:56.716 9779.988 - 9830.400: 66.4062% ( 324)
00:08:56.716 9830.400 - 9880.812: 68.3750% ( 252)
00:08:56.716 9880.812 - 9931.225: 70.0703% ( 217)
00:08:56.716 9931.225 - 9981.637: 71.5078% ( 184)
00:08:56.716 9981.637 - 10032.049: 72.7656% ( 161)
00:08:56.716 10032.049 - 10082.462: 74.1797% ( 181)
00:08:56.716 10082.462 - 10132.874: 75.2656% ( 139)
00:08:56.716 10132.874 - 10183.286: 76.2266% ( 123)
00:08:56.716 10183.286 - 10233.698: 76.9766% ( 96)
00:08:56.716 10233.698 - 10284.111: 77.7578% ( 100)
00:08:56.716 10284.111 - 10334.523: 78.4453% ( 88)
00:08:56.716 10334.523 - 10384.935: 79.0547% ( 78)
00:08:56.716 10384.935 - 10435.348: 79.6328% ( 74)
00:08:56.716 10435.348 - 10485.760: 80.1953% ( 72)
00:08:56.716 10485.760 - 10536.172: 80.7734% ( 74)
00:08:56.716 10536.172 - 10586.585: 81.3516% ( 74)
00:08:56.716 10586.585 - 10636.997: 81.8516% ( 64)
00:08:56.716 10636.997 - 10687.409: 82.3906% ( 69)
00:08:56.716 10687.409 - 10737.822: 82.8984% ( 65)
00:08:56.716 10737.822 - 10788.234: 83.3984% ( 64)
00:08:56.716 10788.234 - 10838.646: 83.7891% ( 50)
00:08:56.716 10838.646 - 10889.058: 84.2734% ( 62)
00:08:56.716 10889.058 - 10939.471: 84.8281% ( 71)
00:08:56.716 10939.471 - 10989.883: 85.3203% ( 63)
00:08:56.716 10989.883 - 11040.295: 85.8203% ( 64)
00:08:56.716 11040.295 - 11090.708: 86.3516% ( 68)
00:08:56.716 11090.708 - 11141.120: 86.8203% ( 60)
00:08:56.716 11141.120 - 11191.532: 87.1484% ( 42)
00:08:56.716 11191.532 - 11241.945: 87.5234% ( 48)
00:08:56.716 11241.945 - 11292.357: 87.9375% ( 53)
00:08:56.716 11292.357 - 11342.769: 88.2656% ( 42)
00:08:56.716 11342.769 - 11393.182: 88.6484% ( 49)
00:08:56.716 11393.182 - 11443.594: 89.0703% ( 54)
00:08:56.716 11443.594 - 11494.006: 89.5625% ( 63)
00:08:56.716 11494.006 - 11544.418: 89.9609% ( 51)
00:08:56.716 11544.418 - 11594.831: 90.4141% ( 58)
00:08:56.716 11594.831 - 11645.243: 90.8438% ( 55)
00:08:56.716 11645.243 - 11695.655: 91.2812% ( 56)
00:08:56.716 11695.655 - 11746.068: 91.6641% ( 49)
00:08:56.716 11746.068 - 11796.480: 92.0781% ( 53)
00:08:56.716 11796.480 - 11846.892: 92.5469% ( 60)
00:08:56.716 11846.892 - 11897.305: 92.9844% ( 56)
00:08:56.716 11897.305 - 11947.717: 93.3828% ( 51)
00:08:56.716 11947.717 - 11998.129: 93.7188% ( 43)
00:08:56.716 11998.129 - 12048.542: 94.0156% ( 38)
00:08:56.716 12048.542 - 12098.954: 94.2969% ( 36)
00:08:56.716 12098.954 - 12149.366: 94.5391% ( 31)
00:08:56.716 12149.366 - 12199.778: 94.7500% ( 27)
00:08:56.716 12199.778 - 12250.191: 94.9688% ( 28)
00:08:56.716 12250.191 - 12300.603: 95.1719% ( 26)
00:08:56.716 12300.603 - 12351.015: 95.3516% ( 23)
00:08:56.716 12351.015 - 12401.428: 95.4688% ( 15)
00:08:56.716 12401.428 - 12451.840: 95.5547% ( 11)
00:08:56.716 12451.840 - 12502.252: 95.6953% ( 18)
00:08:56.716 12502.252 - 12552.665: 95.7422% ( 6)
00:08:56.716 12552.665 - 12603.077: 95.7969% ( 7)
00:08:56.716 12603.077 - 12653.489: 95.8516% ( 7)
00:08:56.716 12653.489 - 12703.902: 95.9297% ( 10)
00:08:56.716 12703.902 - 12754.314: 95.9922% ( 8)
00:08:56.716 12754.314 - 12804.726: 96.0781% ( 11)
00:08:56.716 12804.726 - 12855.138: 96.1484% ( 9)
00:08:56.716 12855.138 - 12905.551: 96.2188% ( 9)
00:08:56.716 12905.551 - 13006.375: 96.3594% ( 18)
00:08:56.716 13006.375 - 13107.200: 96.4297% ( 9)
00:08:56.716 13107.200 - 13208.025: 96.5078% ( 10)
00:08:56.716 13208.025 - 13308.849: 96.6328% ( 16)
00:08:56.716 13308.849 - 13409.674: 96.8125% ( 23)
00:08:56.716 13409.674 - 13510.498: 97.0156% ( 26)
00:08:56.716 13510.498 - 13611.323: 97.1328% ( 15)
00:08:56.716 13611.323 - 13712.148: 97.2734% ( 18)
00:08:56.716 13712.148 - 13812.972: 97.4062% ( 17)
00:08:56.716 13812.972 - 13913.797: 97.5234% ( 15)
00:08:56.716 13913.797 - 14014.622: 97.6406% ( 15)
00:08:56.716 14014.622 - 14115.446: 97.7031% ( 8)
00:08:56.716 14115.446 - 14216.271: 97.7578% ( 7)
00:08:56.716 14216.271 - 14317.095: 97.8047% ( 6)
00:08:56.716 14317.095 - 14417.920: 97.8516% ( 6)
00:08:56.716 14417.920 - 14518.745: 97.9062% ( 7)
00:08:56.716 14518.745 - 14619.569: 97.9531% ( 6)
00:08:56.716 14619.569 - 14720.394: 98.0000% ( 6)
00:08:56.716 14821.218 - 14922.043: 98.0078% ( 1)
00:08:56.716 14922.043 - 15022.868: 98.0547% ( 6)
00:08:56.716 15022.868 - 15123.692: 98.0938% ( 5)
00:08:56.716 15123.692 - 15224.517: 98.1406% ( 6)
00:08:56.716 15224.517 - 15325.342: 98.1875% ( 6)
00:08:56.716 15325.342 - 15426.166: 98.2266% ( 5)
00:08:56.716 15426.166 - 15526.991: 98.2656% ( 5)
00:08:56.716 15526.991 - 15627.815: 98.3125% ( 6)
00:08:56.717 15627.815 - 15728.640: 98.3516% ( 5)
00:08:56.717 15728.640 - 15829.465: 98.3906% ( 5)
00:08:56.717 15829.465 - 15930.289: 98.4375% ( 6)
00:08:56.717 15930.289 - 16031.114: 98.4844% ( 6)
00:08:56.717 16031.114 - 16131.938: 98.5000% ( 2)
00:08:56.717 17745.132 - 17845.957: 98.5312% ( 4)
00:08:56.717 17845.957 - 17946.782: 98.5781% ( 6)
00:08:56.717 17946.782 - 18047.606: 98.6250% ( 6)
00:08:56.717 18047.606 - 18148.431: 98.6719% ( 6)
00:08:56.717 18148.431 - 18249.255: 98.7266% ( 7)
00:08:56.717 18249.255 - 18350.080: 98.7734% ( 6)
00:08:56.717 18350.080 - 18450.905: 98.8203% ( 6)
00:08:56.717 18450.905 - 18551.729: 98.8672% ( 6)
00:08:56.717 18551.729 - 18652.554: 98.9141% ( 6)
00:08:56.717 18652.554 - 18753.378: 98.9609% ( 6)
00:08:56.717 18753.378 - 18854.203: 99.0000% ( 5)
00:08:56.717 25710.277 - 25811.102: 99.0078% ( 1)
00:08:56.717 26012.751 - 26214.400: 99.1094% ( 13)
00:08:56.717 26214.400 - 26416.049: 99.2109% ( 13)
00:08:56.717 26416.049 - 26617.698: 99.2578% ( 6)
00:08:56.717 26617.698 - 26819.348: 99.3047% ( 6)
00:08:56.717 26819.348 - 27020.997: 99.3516% ( 6)
00:08:56.717 27020.997 - 27222.646: 99.3984% ( 6)
00:08:56.717 27222.646 - 27424.295: 99.4453% ( 6)
00:08:56.717 27424.295 - 27625.945: 99.4922% ( 6)
00:08:56.717 27625.945 - 27827.594: 99.5000% ( 1)
00:08:56.717 31053.982 - 31255.631: 99.5703% ( 9)
00:08:56.717 31255.631 - 31457.280: 99.6406% ( 9)
00:08:56.717 32263.877 - 32465.526: 99.6641% ( 3)
00:08:56.717 32465.526 - 32667.175: 99.7109% ( 6)
00:08:56.717 32667.175 - 32868.825: 99.7656% ( 7)
00:08:56.717 32868.825 - 33070.474: 99.8125% ( 6)
00:08:56.717 33070.474 - 33272.123: 99.8594% ( 6)
00:08:56.717 33272.123 - 33473.772: 99.9141% ( 7)
00:08:56.717 33473.772 - 33675.422: 99.9609% ( 6)
00:08:56.717 33675.422 - 33877.071: 100.0000% ( 5)
00:08:56.717
00:08:56.717 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:56.717 ==============================================================================
00:08:56.717 Range in us Cumulative IO count
00:08:56.717 7763.495 - 7813.908: 0.0312% ( 4)
00:08:56.717 7813.908 - 7864.320: 0.0703% ( 5)
00:08:56.717 7864.320 - 7914.732: 0.1328% ( 8)
00:08:56.717 7914.732 - 7965.145: 0.2188% ( 11)
00:08:56.717 7965.145 - 8015.557: 0.3828% ( 21)
00:08:56.717 8015.557 - 8065.969: 0.5703% ( 24)
00:08:56.717 8065.969 - 8116.382: 0.7734% ( 26)
00:08:56.717 8116.382 - 8166.794: 1.1641% ( 50)
00:08:56.717 8166.794 - 8217.206: 1.6094% ( 57)
00:08:56.717 8217.206 - 8267.618: 2.2344% ( 80)
00:08:56.717 8267.618 - 8318.031: 3.0078% ( 99)
00:08:56.717 8318.031 - 8368.443: 3.7109% ( 90)
00:08:56.717 8368.443 - 8418.855: 4.7422% ( 132)
00:08:56.717 8418.855 - 8469.268: 5.6562% ( 117)
00:08:56.717 8469.268 - 8519.680: 6.7812% ( 144)
00:08:56.717 8519.680 - 8570.092: 7.9062% ( 144)
00:08:56.717 8570.092 - 8620.505: 9.2188% ( 168)
00:08:56.717 8620.505 - 8670.917: 10.4688% ( 160)
00:08:56.717 8670.917 - 8721.329: 11.8047% ( 171)
00:08:56.717 8721.329 - 8771.742: 13.3750% ( 201)
00:08:56.717 8771.742 - 8822.154: 15.1719% ( 230)
00:08:56.717 8822.154 - 8872.566: 17.0391% ( 239)
00:08:56.717 8872.566 - 8922.978: 18.8438% ( 231)
00:08:56.717 8922.978 - 8973.391: 21.1172% ( 291)
00:08:56.717 8973.391 - 9023.803: 23.4141% ( 294)
00:08:56.717 9023.803 - 9074.215: 25.7969% ( 305)
00:08:56.717 9074.215 - 9124.628: 28.1875% ( 306)
00:08:56.717 9124.628 - 9175.040: 30.8828% ( 345)
00:08:56.717 9175.040 - 9225.452: 33.6250% ( 351)
00:08:56.717 9225.452 - 9275.865: 36.4922% ( 367)
00:08:56.717 9275.865 - 9326.277: 39.6016% ( 398)
00:08:56.717 9326.277 - 9376.689: 42.5078% ( 372)
00:08:56.717 9376.689 - 9427.102: 45.3516% ( 364)
00:08:56.717 9427.102 - 9477.514: 48.1172% ( 354)
00:08:56.717 9477.514 - 9527.926: 50.5859% ( 316)
00:08:56.717 9527.926 - 9578.338: 53.1172% ( 324)
00:08:56.717 9578.338 - 9628.751: 55.7266% ( 334)
00:08:56.717 9628.751 - 9679.163: 58.3516% ( 336)
00:08:56.717 9679.163 - 9729.575: 60.6875% ( 299)
00:08:56.717 9729.575 - 9779.988: 62.8438% ( 276)
00:08:56.717 9779.988 - 9830.400: 64.6797% ( 235)
00:08:56.717 9830.400 - 9880.812: 66.4141% ( 222)
00:08:56.717 9880.812 - 9931.225: 68.0312% ( 207)
00:08:56.717 9931.225 - 9981.637: 69.6250% ( 204)
00:08:56.717 9981.637 - 10032.049: 71.2422% ( 207)
00:08:56.717 10032.049 - 10082.462: 72.6016% ( 174)
00:08:56.717 10082.462 - 10132.874: 73.9297% ( 170)
00:08:56.717 10132.874 - 10183.286: 75.0625% ( 145)
00:08:56.717 10183.286 - 10233.698: 76.1250% ( 136)
00:08:56.717 10233.698 - 10284.111: 77.1719% ( 134)
00:08:56.717 10284.111 - 10334.523: 78.1484% ( 125)
00:08:56.717 10334.523 - 10384.935: 79.1562% ( 129)
00:08:56.717 10384.935 - 10435.348: 79.9688% ( 104)
00:08:56.717 10435.348 - 10485.760: 80.6328% ( 85)
00:08:56.717 10485.760 - 10536.172: 81.2891% ( 84)
00:08:56.717 10536.172 - 10586.585: 81.8359% ( 70)
00:08:56.717 10586.585 - 10636.997: 82.3203% ( 62)
00:08:56.717 10636.997 - 10687.409: 82.7656% ( 57)
00:08:56.717 10687.409 - 10737.822: 83.3203% ( 71)
00:08:56.717 10737.822 - 10788.234: 83.7656% ( 57)
00:08:56.717 10788.234 - 10838.646: 84.2812% ( 66)
00:08:56.717 10838.646 - 10889.058: 84.7422% ( 59)
00:08:56.717 10889.058 - 10939.471: 85.1719% ( 55)
00:08:56.717 10939.471 - 10989.883: 85.6016% ( 55)
00:08:56.717 10989.883 - 11040.295: 86.0078% ( 52)
00:08:56.717 11040.295 - 11090.708: 86.4688% ( 59)
00:08:56.717 11090.708 - 11141.120: 86.8359% ( 47)
00:08:56.717 11141.120 - 11191.532: 87.2344% ( 51)
00:08:56.717 11191.532 - 11241.945: 87.7266% ( 63)
00:08:56.717 11241.945 - 11292.357: 88.2266% ( 64)
00:08:56.717 11292.357 - 11342.769: 88.7266% ( 64)
00:08:56.717 11342.769 - 11393.182: 89.1875% ( 59)
00:08:56.717 11393.182 - 11443.594: 89.6719% ( 62)
00:08:56.717 11443.594 - 11494.006: 90.0391% ( 47)
00:08:56.717 11494.006 - 11544.418: 90.4453% ( 52)
00:08:56.717 11544.418 - 11594.831: 90.8828% ( 56)
00:08:56.717 11594.831 - 11645.243: 91.3359% ( 58)
00:08:56.717 11645.243 - 11695.655: 91.7344% ( 51)
00:08:56.717 11695.655 - 11746.068: 92.3047% ( 73)
00:08:56.717 11746.068 - 11796.480: 92.6797% ( 48)
00:08:56.717 11796.480 - 11846.892: 93.0234% ( 44)
00:08:56.717 11846.892 - 11897.305: 93.3203% ( 38)
00:08:56.717 11897.305 - 11947.717: 93.6484% ( 42)
00:08:56.717 11947.717 - 11998.129: 93.9844% ( 43)
00:08:56.717 11998.129 - 12048.542: 94.2344% ( 32)
00:08:56.717 12048.542 - 12098.954: 94.4141% ( 23)
00:08:56.717 12098.954 - 12149.366: 94.6641% ( 32)
00:08:56.717 12149.366 - 12199.778: 94.8125% ( 19)
00:08:56.717 12199.778 - 12250.191: 94.9609% ( 19)
00:08:56.717 12250.191 - 12300.603: 95.0938% ( 17)
00:08:56.717 12300.603 - 12351.015: 95.2109% ( 15)
00:08:56.717 12351.015 - 12401.428: 95.2969% ( 11)
00:08:56.717 12401.428 - 12451.840: 95.3672% ( 9)
00:08:56.717 12451.840 - 12502.252: 95.4844% ( 15)
00:08:56.717 12502.252 - 12552.665: 95.5391% ( 7)
00:08:56.717 12552.665 - 12603.077: 95.6094% ( 9)
00:08:56.717 12603.077 - 12653.489: 95.6719% ( 8)
00:08:56.717 12653.489 - 12703.902: 95.7031% ( 4)
00:08:56.717 12703.902 - 12754.314: 95.7344% ( 4)
00:08:56.717 12754.314 - 12804.726: 95.7656% ( 4)
00:08:56.717 12804.726 - 12855.138: 95.8438% ( 10)
00:08:56.717 12855.138 - 12905.551: 95.9375% ( 12)
00:08:56.717 12905.551 - 13006.375: 96.1016% ( 21)
00:08:56.717 13006.375 - 13107.200: 96.2891% ( 24)
00:08:56.717 13107.200 - 13208.025: 96.5703% ( 36)
00:08:56.717 13208.025 - 13308.849: 96.7812% ( 27)
00:08:56.717 13308.849 - 13409.674: 96.9297% ( 19)
00:08:56.717 13409.674 - 13510.498: 97.0625% ( 17)
00:08:56.717 13510.498 - 13611.323: 97.1641% ( 13)
00:08:56.717 13611.323 - 13712.148: 97.2578% ( 12)
00:08:56.717 13712.148 - 13812.972: 97.3359% ( 10)
00:08:56.717 13812.972 - 13913.797: 97.4922% ( 20)
00:08:56.717 13913.797 - 14014.622: 97.5859% ( 12)
00:08:56.717 14014.622 - 14115.446: 97.6406% ( 7)
00:08:56.717 14115.446 - 14216.271: 97.6875% ( 6)
00:08:56.717 14216.271 - 14317.095: 97.7266% ( 5)
00:08:56.717 14317.095 - 14417.920: 97.7734% ( 6)
00:08:56.717 14417.920 - 14518.745: 97.8203% ( 6)
00:08:56.717 14518.745 - 14619.569: 97.8594% ( 5)
00:08:56.717 14619.569 - 14720.394: 97.9531% ( 12)
00:08:56.717 14720.394 - 14821.218: 98.0703% ( 15)
00:08:56.717 14821.218 - 14922.043: 98.1875% ( 15)
00:08:56.717 14922.043 - 15022.868: 98.3438% ( 20)
00:08:56.717 15022.868 - 15123.692: 98.3984% ( 7)
00:08:56.717 15123.692 - 15224.517: 98.4375% ( 5)
00:08:56.717 15224.517 - 15325.342: 98.4844% ( 6)
00:08:56.717 15325.342 - 15426.166: 98.5000% ( 2)
00:08:56.717 17845.957 - 17946.782: 98.5547% ( 7)
00:08:56.717 17946.782 - 18047.606: 98.6016% ( 6)
00:08:56.717 18047.606 - 18148.431: 98.6484% ( 6)
00:08:56.717 18148.431 - 18249.255: 98.6953% ( 6)
00:08:56.717 18249.255 - 18350.080: 98.7422% ( 6)
00:08:56.717 18350.080 - 18450.905: 98.7891% ( 6)
00:08:56.717 18450.905 - 18551.729: 98.8359% ( 6)
00:08:56.717 18551.729 - 18652.554: 98.8828% ( 6)
00:08:56.717 18652.554 - 18753.378: 98.9297% ( 6)
00:08:56.717 18753.378 - 18854.203: 98.9766% ( 6)
00:08:56.717 18854.203 - 18955.028: 99.0000% ( 3)
00:08:56.717 24601.206 - 24702.031: 99.0078% ( 1)
00:08:56.717 24702.031 - 24802.855: 99.1016% ( 12)
00:08:56.717 24802.855 - 24903.680: 99.1562% ( 7)
00:08:56.717 24903.680 - 25004.505: 99.2188% ( 8)
00:08:56.717 25004.505 - 25105.329: 99.2422% ( 3)
00:08:56.717 25105.329 - 25206.154: 99.2656% ( 3)
00:08:56.717 25206.154 - 25306.978: 99.2891% ( 3)
00:08:56.717 25306.978 - 25407.803: 99.3125% ( 3)
00:08:56.717 25407.803 - 25508.628: 99.3359% ( 3)
00:08:56.717 25508.628 - 25609.452: 99.3594% ( 3)
00:08:56.717 25609.452 - 25710.277: 99.3828% ( 3)
00:08:56.717 25710.277 - 25811.102: 99.4062% ( 3)
00:08:56.717 25811.102 - 26012.751: 99.4531% ( 6)
00:08:56.717 26012.751 - 26214.400: 99.4922% ( 5)
00:08:56.717 26214.400 - 26416.049: 99.5000% ( 1)
00:08:56.717 29440.788 - 29642.437: 99.5469% ( 6)
00:08:56.717 29642.437 - 29844.086: 99.5625% ( 2)
00:08:56.717 30852.332 - 31053.982: 99.6016% ( 5)
00:08:56.717 31053.982 - 31255.631: 99.6406% ( 5)
00:08:56.717 31255.631 - 31457.280: 99.6875% ( 6)
00:08:56.717 31457.280 - 31658.929: 99.7422% ( 7)
00:08:56.717 31658.929 - 31860.578: 99.7969% ( 7)
00:08:56.718 31860.578 - 32062.228: 99.8438% ( 6)
00:08:56.718 32062.228 - 32263.877: 99.8984% ( 7)
00:08:56.718 32263.877 - 32465.526: 99.9531% ( 7)
00:08:56.718 32465.526 - 32667.175: 100.0000% ( 6)
00:08:56.718
00:08:56.718 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:56.718 ==============================================================================
00:08:56.718 Range in us Cumulative IO count
00:08:56.718 7713.083 - 7763.495: 0.0078% ( 1)
00:08:56.718 7763.495 - 7813.908: 0.0156% ( 1)
00:08:56.718 7864.320 - 7914.732: 0.0547% ( 5)
00:08:56.718 7914.732 - 7965.145: 0.1328% ( 10)
00:08:56.718 7965.145 - 8015.557: 0.2422% ( 14)
00:08:56.718 8015.557 - 8065.969: 0.4453% ( 26)
00:08:56.718 8065.969 - 8116.382: 0.7109% ( 34)
00:08:56.718 8116.382 - 8166.794: 1.0938% ( 49)
00:08:56.718 8166.794 - 8217.206: 1.5781% ( 62)
00:08:56.718 8217.206 - 8267.618: 2.2188% ( 82)
00:08:56.718 8267.618 - 8318.031: 2.8203% ( 77)
00:08:56.718 8318.031 - 8368.443: 3.6562% ( 107)
00:08:56.718 8368.443 - 8418.855: 4.5625% ( 116)
00:08:56.718 8418.855 - 8469.268: 5.4688% ( 116)
00:08:56.718 8469.268 - 8519.680: 6.6484% ( 151)
00:08:56.718 8519.680 - 8570.092: 7.8438% ( 153)
00:08:56.718 8570.092 - 8620.505: 9.2031% ( 174)
00:08:56.718 8620.505 - 8670.917: 10.5859% ( 177)
00:08:56.718 8670.917 - 8721.329: 11.9375% ( 173)
00:08:56.718 8721.329 - 8771.742: 13.4922% ( 199)
00:08:56.718 8771.742 - 8822.154: 14.9375% ( 185)
00:08:56.718 8822.154 - 8872.566: 16.3828% ( 185)
00:08:56.718 8872.566 - 8922.978: 18.1641% ( 228)
00:08:56.718 8922.978 - 8973.391: 20.0000% ( 235)
00:08:56.718 8973.391 - 9023.803: 22.1172% ( 271)
00:08:56.718 9023.803 - 9074.215: 24.5391% ( 310)
00:08:56.718 9074.215 - 9124.628: 27.0234% ( 318)
00:08:56.718 9124.628 - 9175.040: 29.6641% ( 338)
00:08:56.718 9175.040 - 9225.452: 32.7109% ( 390)
00:08:56.718 9225.452 - 9275.865: 35.8672% ( 404)
00:08:56.718 9275.865 - 9326.277: 39.3672% ( 448)
00:08:56.718 9326.277 - 9376.689: 42.6250% ( 417)
00:08:56.718 9376.689 - 9427.102: 45.6172% ( 383)
00:08:56.718 9427.102 - 9477.514: 48.2109% ( 332)
00:08:56.718 9477.514 - 9527.926: 50.8672% ( 340)
00:08:56.718 9527.926 - 9578.338: 53.6406% ( 355)
00:08:56.718 9578.338 - 9628.751: 56.3125% ( 342)
00:08:56.718 9628.751 - 9679.163: 58.8594% ( 326)
00:08:56.718 9679.163 - 9729.575: 61.1484% ( 293)
00:08:56.718 9729.575 - 9779.988: 63.1953% ( 262)
00:08:56.718 9779.988 - 9830.400: 64.8047% ( 206)
00:08:56.718 9830.400 - 9880.812: 66.9766% ( 278)
00:08:56.718 9880.812 - 9931.225: 68.5703% ( 204)
00:08:56.718 9931.225 - 9981.637: 69.9766% ( 180)
00:08:56.718 9981.637 - 10032.049: 71.3125% ( 171)
00:08:56.718 10032.049 - 10082.462: 72.6875% ( 176)
00:08:56.718 10082.462 - 10132.874: 73.8125% ( 144)
00:08:56.718 10132.874 - 10183.286: 75.0234% ( 155)
00:08:56.718 10183.286 - 10233.698: 76.1562% ( 145)
00:08:56.718 10233.698 - 10284.111: 77.2188% ( 136)
00:08:56.718 10284.111 - 10334.523: 78.1641% ( 121)
00:08:56.718 10334.523 - 10384.935: 79.1016% ( 120)
00:08:56.718 10384.935 - 10435.348: 80.1016% ( 128)
00:08:56.718 10435.348 - 10485.760: 80.9688% ( 111)
00:08:56.718 10485.760 - 10536.172: 81.5547% ( 75)
00:08:56.718 10536.172 - 10586.585: 82.0547% ( 64)
00:08:56.718 10586.585 - 10636.997: 82.4766% ( 54)
00:08:56.718 10636.997 - 10687.409: 82.9297% ( 58)
00:08:56.718 10687.409 - 10737.822: 83.3594% ( 55)
00:08:56.718 10737.822 - 10788.234: 83.8281% ( 60)
00:08:56.718 10788.234 - 10838.646: 84.2031% ( 48)
00:08:56.718 10838.646 - 10889.058: 84.5234% ( 41)
00:08:56.718 10889.058 - 10939.471: 84.9219% ( 51)
00:08:56.718 10939.471 - 10989.883: 85.3047% ( 49)
00:08:56.718 10989.883 - 11040.295: 85.6484% ( 44)
00:08:56.718 11040.295 - 11090.708: 85.9844% ( 43)
00:08:56.718 11090.708 - 11141.120: 86.4609% ( 61)
00:08:56.718 11141.120 - 11191.532: 86.9531% ( 63)
00:08:56.718 11191.532 - 11241.945: 87.4922% ( 69)
00:08:56.718 11241.945 - 11292.357: 88.0625% ( 73)
00:08:56.718 11292.357 - 11342.769: 88.5312% ( 60)
00:08:56.718 11342.769 - 11393.182: 89.0391% ( 65)
00:08:56.718 11393.182 - 11443.594: 89.7734% ( 94)
00:08:56.718 11443.594 - 11494.006: 90.3594% ( 75)
00:08:56.718 11494.006 - 11544.418: 90.9922% ( 81)
00:08:56.718 11544.418 - 11594.831: 91.4531% ( 59)
00:08:56.718 11594.831 - 11645.243: 91.8516% ( 51)
00:08:56.718 11645.243 - 11695.655: 92.3047% ( 58)
00:08:56.718 11695.655 - 11746.068: 92.7266% ( 54)
00:08:56.718 11746.068 - 11796.480: 93.0234% ( 38)
00:08:56.718 11796.480 - 11846.892: 93.3984% ( 48)
00:08:56.718 11846.892 - 11897.305: 93.6641% ( 34)
00:08:56.718 11897.305 - 11947.717: 93.8672% ( 26)
00:08:56.718 11947.717 - 11998.129: 94.0625% ( 25)
00:08:56.718 11998.129 - 12048.542: 94.2422% ( 23)
00:08:56.718 12048.542 - 12098.954: 94.3984% ( 20)
00:08:56.718 12098.954 - 12149.366: 94.5781% ( 23)
00:08:56.718 12149.366 - 12199.778: 94.7500% ( 22)
00:08:56.718 12199.778 - 12250.191: 94.9609% ( 27)
00:08:56.718 12250.191 - 12300.603: 95.1328% ( 22)
00:08:56.718 12300.603 - 12351.015: 95.2734% ( 18)
00:08:56.718 12351.015 - 12401.428: 95.3906% ( 15)
00:08:56.718 12401.428 - 12451.840: 95.4609% ( 9)
00:08:56.718 12451.840 - 12502.252: 95.5625% ( 13)
00:08:56.718 12502.252 - 12552.665: 95.6406% ( 10)
00:08:56.718 12552.665 - 12603.077: 95.7188% ( 10)
00:08:56.718 12603.077 - 12653.489: 95.7969% ( 10)
00:08:56.718 12653.489 - 12703.902: 95.8672% ( 9)
00:08:56.718 12703.902 - 12754.314: 95.9531% ( 11)
00:08:56.718 12754.314 - 12804.726: 96.0781% ( 16)
00:08:56.718 12804.726 - 12855.138: 96.1875% ( 14)
00:08:56.718 12855.138 - 12905.551: 96.2734% ( 11)
00:08:56.718 12905.551 - 13006.375: 96.4297% ( 20)
00:08:56.718 13006.375 - 13107.200: 96.5781% ( 19)
00:08:56.718 13107.200 - 13208.025: 96.7500% ( 22)
00:08:56.718 13208.025 - 13308.849: 96.8906% ( 18)
00:08:56.718 13308.849 - 13409.674: 97.0625% ( 22)
00:08:56.718 13409.674 - 13510.498: 97.2266% ( 21)
00:08:56.718 13510.498 - 13611.323: 97.3516% ( 16)
00:08:56.718 13611.323 - 13712.148: 97.4609% ( 14)
00:08:56.718 13712.148 - 13812.972: 97.5625% ( 13)
00:08:56.718 13812.972 - 13913.797: 97.7422% ( 23)
00:08:56.718 13913.797 - 14014.622: 97.8438% ( 13)
00:08:56.718 14014.622 - 14115.446: 97.9375% ( 12)
00:08:56.718 14115.446 - 14216.271: 98.0000% ( 8)
00:08:56.718 15426.166 - 15526.991: 98.0078% ( 1)
00:08:56.718 15526.991 - 15627.815: 98.0469% ( 5)
00:08:56.718 15627.815 - 15728.640: 98.1094% ( 8)
00:08:56.718 15728.640 - 15829.465: 98.2109% ( 13)
00:08:56.718 15829.465 - 15930.289: 98.3281% ( 15)
00:08:56.718 15930.289 - 16031.114: 98.4062% ( 10)
00:08:56.718 16031.114 - 16131.938: 98.4453% ( 5)
00:08:56.718 16131.938 - 16232.763: 98.4922% ( 6)
00:08:56.718 16232.763 - 16333.588: 98.5000% ( 1)
00:08:56.718 17543.483 - 17644.308: 98.5391% ( 5)
00:08:56.718 17644.308 - 17745.132: 98.5859% ( 6)
00:08:56.718 17745.132 - 17845.957: 98.6328% ( 6)
00:08:56.718 17845.957 - 17946.782: 98.6797% ( 6)
00:08:56.718 17946.782 - 18047.606: 98.7266% ( 6)
00:08:56.718 18047.606 - 18148.431: 98.7734% ( 6)
00:08:56.718 18148.431 - 18249.255: 98.8203% ( 6)
00:08:56.718 18249.255 - 18350.080: 98.8672% ( 6)
00:08:56.718 18350.080 - 18450.905: 98.9141% ( 6)
00:08:56.718 18450.905 - 18551.729: 98.9609% ( 6)
00:08:56.718 18551.729 - 18652.554: 99.0000% ( 5)
00:08:56.718 22887.188 - 22988.012: 99.0156% ( 2)
00:08:56.718 22988.012 - 23088.837: 99.0547% ( 5)
00:08:56.718 23088.837 - 23189.662: 99.1016% ( 6)
00:08:56.718 23189.662 - 23290.486: 99.1328% ( 4)
00:08:56.718 23290.486 - 23391.311: 99.1797% ( 6)
00:08:56.718 23391.311 - 23492.135: 99.2031% ( 3)
00:08:56.718 23492.135 - 23592.960: 99.2188% ( 2)
00:08:56.718 23592.960 - 23693.785: 99.2422% ( 3)
00:08:56.718 23693.785 - 23794.609: 99.2656% ( 3)
00:08:56.718 23794.609 - 23895.434: 99.2891% ( 3)
00:08:56.718 23895.434 - 23996.258: 99.3125% ( 3)
00:08:56.718 23996.258 - 24097.083: 99.3281% ( 2)
00:08:56.718 24097.083 - 24197.908: 99.3438% ( 2)
00:08:56.718 24197.908 - 24298.732: 99.3516% ( 1)
00:08:56.718 24298.732 - 24399.557: 99.3750% ( 3)
00:08:56.718 24399.557 - 24500.382: 99.3984% ( 3)
00:08:56.718 24500.382 - 24601.206: 99.4219% ( 3)
00:08:56.718 24601.206 - 24702.031: 99.4453% ( 3)
00:08:56.718 24702.031 - 24802.855: 99.4766% ( 4)
00:08:56.718 24802.855 - 24903.680: 99.5000% ( 3)
00:08:56.718 28029.243 - 28230.892: 99.5391% ( 5)
00:08:56.718 28230.892 - 28432.542: 99.6562% ( 15)
00:08:56.718 29037.489 - 29239.138: 99.6719% ( 2)
00:08:56.718 29239.138 - 29440.788: 99.6875% ( 2)
00:08:56.718 29440.788 - 29642.437: 99.7031% ( 2)
00:08:56.718 29642.437 - 29844.086: 99.7891% ( 11)
00:08:56.718 29844.086 - 30045.735: 99.8516% ( 8)
00:08:56.718 30045.735 - 30247.385: 99.9062% ( 7)
00:08:56.718 30247.385 - 30449.034: 99.9531% ( 6)
00:08:56.718 30449.034 - 30650.683: 100.0000% ( 6)
00:08:56.718
00:08:56.718 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:56.718 ==============================================================================
00:08:56.718 Range in us Cumulative IO count
00:08:56.718 7713.083 - 7763.495: 0.0156% ( 2)
00:08:56.718 7813.908 - 7864.320: 0.0547% ( 5)
00:08:56.718 7864.320 - 7914.732: 0.1016% ( 6)
00:08:56.719 7914.732 - 7965.145: 0.2109% ( 14)
00:08:56.719 7965.145 - 8015.557: 0.4062% ( 25)
00:08:56.719 8015.557 - 8065.969: 0.6875% ( 36)
00:08:56.719 8065.969 - 8116.382: 1.0781% ( 50)
00:08:56.719 8116.382 - 8166.794: 1.4766% ( 51)
00:08:56.719 8166.794 - 8217.206: 1.9453% ( 60)
00:08:56.719 8217.206 - 8267.618: 2.4453% ( 64)
00:08:56.719 8267.618 - 8318.031: 3.1406% ( 89)
00:08:56.719 8318.031 - 8368.443: 3.9375% ( 102)
00:08:56.719 8368.443 - 8418.855: 4.5859% ( 83)
00:08:56.719 8418.855 - 8469.268: 5.3203% ( 94)
00:08:56.719 8469.268 - 8519.680: 6.4219% ( 141)
00:08:56.719 8519.680 - 8570.092: 7.6250% ( 154)
00:08:56.719 8570.092 - 8620.505: 8.9922% ( 175)
00:08:56.719 8620.505 - 8670.917: 10.1875% ( 153)
00:08:56.719 8670.917 - 8721.329: 11.3047% ( 143)
00:08:56.719 8721.329 - 8771.742: 12.7812% ( 189)
00:08:56.719 8771.742 - 8822.154: 14.3203% ( 197)
00:08:56.719 8822.154 - 8872.566: 15.8828% ( 200)
00:08:56.719 8872.566 - 8922.978: 17.6875% ( 231)
00:08:56.719 8922.978 - 8973.391: 19.6797% ( 255)
00:08:56.719 8973.391 - 9023.803: 21.9844% ( 295)
00:08:56.719 9023.803 - 9074.215: 24.5156% ( 324)
00:08:56.719 9074.215 - 9124.628: 27.1953% ( 343)
00:08:56.719 9124.628 - 9175.040: 30.1641% ( 380)
00:08:56.719 9175.040 - 9225.452: 33.0156% ( 365)
00:08:56.719 9225.452 - 9275.865: 36.3516% ( 427)
00:08:56.719 9275.865 - 9326.277: 39.5000% ( 403)
00:08:56.719 9326.277 - 9376.689: 42.6797% ( 407)
00:08:56.719 9376.689 - 9427.102: 45.5000% ( 361)
00:08:56.719 9427.102 - 9477.514: 48.5625% ( 392)
00:08:56.719 9477.514 - 9527.926: 51.2969% ( 350)
00:08:56.719 9527.926 - 9578.338: 54.1016% ( 359)
00:08:56.719 9578.338 - 9628.751: 56.6719% ( 329)
00:08:56.719 9628.751 - 9679.163: 59.1406% ( 316)
00:08:56.719 9679.163 - 9729.575: 61.2344% ( 268)
00:08:56.719 9729.575 - 9779.988: 63.1250% ( 242)
00:08:56.719 9779.988 - 9830.400: 65.0391% ( 245)
00:08:56.719 9830.400 - 9880.812: 66.4844% ( 185)
00:08:56.719 9880.812 - 9931.225: 68.1172% ( 209)
00:08:56.719 9931.225 - 9981.637: 69.5781% ( 187)
00:08:56.719 9981.637 - 10032.049: 71.0234% ( 185)
00:08:56.719 10032.049 - 10082.462: 72.5156% ( 191)
00:08:56.719 10082.462 - 10132.874: 73.6719% ( 148)
00:08:56.719 10132.874 - 10183.286: 74.9297% ( 161)
00:08:56.719 10183.286 - 10233.698: 75.9766% ( 134)
00:08:56.719 10233.698 - 10284.111: 76.9219% ( 121)
00:08:56.719 10284.111 - 10334.523: 77.9375% ( 130)
00:08:56.719 10334.523 - 10384.935: 78.8438% ( 116)
00:08:56.719 10384.935 - 10435.348: 79.8281% ( 126)
00:08:56.719 10435.348 - 10485.760: 80.5469% ( 92)
00:08:56.719 10485.760 - 10536.172: 81.1953% ( 83)
00:08:56.719 10536.172 - 10586.585: 81.8125% ( 79)
00:08:56.719 10586.585 - 10636.997: 82.3594% ( 70)
00:08:56.719 10636.997 - 10687.409: 82.8047% ( 57)
00:08:56.719 10687.409 - 10737.822: 83.2188% ( 53)
00:08:56.719 10737.822 - 10788.234: 83.5391% ( 41)
00:08:56.719 10788.234 - 10838.646: 84.0156% ( 61)
00:08:56.719 10838.646 - 10889.058: 84.4531% ( 56)
00:08:56.719 10889.058 - 10939.471: 84.8672% ( 53)
00:08:56.719 10939.471 - 10989.883: 85.2344% ( 47)
00:08:56.719 10989.883 - 11040.295: 85.6250% ( 50)
00:08:56.719 11040.295 - 11090.708: 86.0703% ( 57)
00:08:56.719 11090.708 - 11141.120: 86.5078% ( 56)
00:08:56.719 11141.120 - 11191.532: 86.9844% ( 61)
00:08:56.719 11191.532 - 11241.945: 87.4844% ( 64)
00:08:56.719 11241.945 - 11292.357: 87.9453% ( 59)
00:08:56.719 11292.357 - 11342.769: 88.3594% ( 53)
00:08:56.719 11342.769 - 11393.182: 88.9219% ( 72)
00:08:56.719 11393.182 - 11443.594: 89.4297% ( 65)
00:08:56.719 11443.594 - 11494.006: 90.0547% ( 80)
00:08:56.719 11494.006 - 11544.418: 90.6250% ( 73)
00:08:56.719 11544.418 - 11594.831: 91.1641% ( 69)
00:08:56.719 11594.831 - 11645.243: 91.5781% ( 53)
00:08:56.719 11645.243 - 11695.655: 92.1328% ( 71)
00:08:56.719 11695.655 - 11746.068: 92.5938% ( 59)
00:08:56.719 11746.068 - 11796.480: 93.0312% ( 56)
00:08:56.719 11796.480 - 11846.892: 93.4297% ( 51)
00:08:56.719 11846.892 - 11897.305: 93.6406% ( 27)
00:08:56.719 11897.305 - 11947.717: 93.9219% ( 36)
00:08:56.719 11947.717 - 11998.129: 94.1328% ( 27)
00:08:56.719 11998.129 - 12048.542: 94.3281% ( 25)
00:08:56.719 12048.542 - 12098.954: 94.5781% ( 32)
00:08:56.719 12098.954 - 12149.366: 94.7188% ( 18)
00:08:56.719 12149.366 - 12199.778: 94.8594% ( 18)
00:08:56.719 12199.778 - 12250.191: 94.9844% ( 16)
00:08:56.719 12250.191 - 12300.603: 95.0938% ( 14)
00:08:56.719 12300.603 - 12351.015: 95.1797% ( 11)
00:08:56.719 12351.015 - 12401.428: 95.2734% ( 12)
00:08:56.719 12401.428 - 12451.840: 95.3828% ( 14)
00:08:56.719 12451.840 - 12502.252: 95.4922% ( 14)
00:08:56.719 12502.252 - 12552.665: 95.6562% ( 21)
00:08:56.719 12552.665 - 12603.077: 95.7656% ( 14)
00:08:56.719 12603.077 - 12653.489: 95.8906% ( 16)
00:08:56.719 12653.489 - 12703.902: 96.0391% ( 19)
00:08:56.719 12703.902 - 12754.314: 96.2578% ( 28)
00:08:56.719 12754.314 - 12804.726: 96.4375% ( 23)
00:08:56.719 12804.726 - 12855.138: 96.5469% ( 14)
00:08:56.719 12855.138 - 12905.551: 96.6875% ( 18)
00:08:56.719 12905.551 - 13006.375: 96.8984% ( 27)
00:08:56.719 13006.375 - 13107.200: 97.0391% ( 18)
00:08:56.719 13107.200 - 13208.025: 97.1250% ( 11)
00:08:56.719 13208.025 - 13308.849: 97.2031% ( 10)
00:08:56.719 13308.849 - 13409.674: 97.2969% ( 12)
00:08:56.719 13409.674 - 13510.498: 97.3828% ( 11)
00:08:56.719 13510.498 - 13611.323: 97.4766% ( 12)
00:08:56.719 13611.323 - 13712.148: 97.5859% ( 14)
00:08:56.719 13712.148 - 13812.972: 97.6719% ( 11)
00:08:56.719 13812.972 - 13913.797: 97.7422% ( 9)
00:08:56.719 13913.797 - 14014.622: 97.8438% ( 13)
00:08:56.719 14014.622 - 14115.446: 97.8828% ( 5)
00:08:56.719 14115.446 - 14216.271: 97.9297% ( 6)
00:08:56.719 14216.271 - 14317.095: 97.9688% ( 5)
00:08:56.719 14317.095 - 14417.920: 98.0000% ( 4)
00:08:56.719 16131.938 - 16232.763: 98.0547% ( 7)
00:08:56.719 16232.763 - 16333.588: 98.1094% ( 7)
00:08:56.719 16333.588 - 16434.412: 98.1641% ( 7)
00:08:56.719 16434.412 - 16535.237: 98.3438% ( 23)
00:08:56.719 16535.237 - 16636.062: 98.3984% ( 7)
00:08:56.719 16636.062 - 16736.886: 98.4375% ( 5)
00:08:56.719 16736.886 - 16837.711: 98.4766% ( 5)
00:08:56.719 16837.711 - 16938.535: 98.5000% ( 3)
00:08:56.719 16938.535 - 17039.360: 98.5234% ( 3)
00:08:56.719 17039.360 - 17140.185: 98.5703% ( 6)
00:08:56.719 17140.185 - 17241.009: 98.6172% ( 6)
00:08:56.719 17241.009 - 17341.834: 98.6562% ( 5)
00:08:56.719 17341.834 - 17442.658: 98.7031% ( 6)
00:08:56.719 17442.658 - 17543.483: 98.7500% ( 6)
00:08:56.719 17543.483 - 17644.308: 98.7969% ( 6)
00:08:56.719 17644.308 - 17745.132: 98.8438% ( 6)
00:08:56.719 17745.132 - 17845.957: 98.8906% ( 6)
00:08:56.719 17845.957 - 17946.782: 98.9375% ( 6)
00:08:56.719 17946.782 - 18047.606: 98.9844% ( 6)
00:08:56.719 18047.606 - 18148.431: 99.0000% ( 2)
00:08:56.719 21173.169 - 21273.994: 99.0078% ( 1)
00:08:56.719 21273.994 - 21374.818: 99.0391% ( 4)
00:08:56.719 21374.818 - 21475.643: 99.0859% ( 6)
00:08:56.719 21475.643 - 21576.468: 99.1250% ( 5)
00:08:56.719 21576.468 - 21677.292: 99.1641% ( 5)
00:08:56.719 21677.292 - 21778.117: 99.2031% ( 5)
00:08:56.719 21778.117 - 21878.942: 99.2266% ( 3)
00:08:56.719 21878.942 - 21979.766: 99.2578% ( 4)
00:08:56.719 21979.766 - 22080.591: 99.2812% ( 3)
00:08:56.719 22080.591 - 22181.415: 99.3125% ( 4)
00:08:56.719 22181.415 - 22282.240: 99.3359% ( 3)
00:08:56.719 22282.240 - 22383.065: 99.3594% ( 3)
00:08:56.719 22383.065 - 22483.889: 99.3828% ( 3)
00:08:56.719 22483.889 - 22584.714: 99.4062% ( 3)
00:08:56.719 22584.714 - 22685.538: 99.4375% ( 4)
00:08:56.719 22685.538 - 22786.363: 99.4531% ( 2)
00:08:56.719 22786.363 - 22887.188: 99.4766% ( 3)
00:08:56.719 22887.188 - 22988.012: 99.5000% ( 3)
00:08:56.719 26214.400 - 26416.049: 99.5078% ( 1)
00:08:56.719 26416.049 - 26617.698: 99.5859% ( 10)
00:08:56.719 26617.698 - 26819.348: 99.6875% ( 13)
00:08:56.719 27625.945 - 27827.594: 99.7188% ( 4)
00:08:56.719 27827.594 - 28029.243: 99.7656% ( 6)
00:08:56.719 28029.243 - 28230.892: 99.8203% ( 7)
00:08:56.719 28230.892 - 28432.542: 99.8672% ( 6)
00:08:56.719 28432.542 - 28634.191: 99.9219% ( 7)
00:08:56.719 28634.191 - 28835.840: 99.9766% ( 7)
00:08:56.719 28835.840 - 29037.489: 100.0000% ( 3)
00:08:56.719
00:08:56.719 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:56.720 ==============================================================================
00:08:56.720 Range in us Cumulative IO count
00:08:56.720 7713.083 - 7763.495: 0.0078% ( 1)
00:08:56.720 7813.908 - 7864.320: 0.0156% ( 1)
00:08:56.720 7864.320 - 7914.732: 0.0234% ( 1)
00:08:56.720 7914.732 - 7965.145: 0.0625% ( 5)
00:08:56.720 7965.145 - 8015.557: 0.2344% ( 22)
00:08:56.720 8015.557 - 8065.969: 0.4453% ( 27)
00:08:56.720 8065.969 - 8116.382: 0.7109% ( 34)
00:08:56.720 8116.382 - 8166.794: 1.1016% ( 50)
00:08:56.720 8166.794 - 8217.206: 1.6172% ( 66)
00:08:56.720 8217.206 - 8267.618: 2.3281% ( 91)
00:08:56.720 8267.618 - 8318.031: 3.0938% ( 98)
00:08:56.720 8318.031 - 8368.443: 3.7109% ( 79)
00:08:56.720 8368.443 - 8418.855: 4.3906% ( 87)
00:08:56.720 8418.855 - 8469.268: 5.5078% ( 143)
00:08:56.720 8469.268 - 8519.680: 6.6094% ( 141)
00:08:56.720 8519.680 - 8570.092: 7.7422% ( 145)
00:08:56.720 8570.092 - 8620.505: 8.6953% ( 122)
00:08:56.720 8620.505 - 8670.917: 9.8672% ( 150)
00:08:56.720 8670.917 - 8721.329: 11.4766% ( 206)
00:08:56.720 8721.329 - 8771.742: 12.7422% ( 162)
00:08:56.720 8771.742 - 8822.154: 13.9922% ( 160)
00:08:56.720 8822.154 - 8872.566: 15.5312% ( 197)
00:08:56.720 8872.566 - 8922.978: 17.6250% ( 268)
00:08:56.720 8922.978 - 8973.391: 19.3516% ( 221)
00:08:56.720 8973.391 - 9023.803: 21.4219% ( 265)
00:08:56.720 9023.803 - 9074.215: 24.0000% ( 330)
00:08:56.720 9074.215 - 9124.628: 27.1562% ( 404)
00:08:56.720 9124.628 - 9175.040: 30.1172% ( 379)
00:08:56.720 9175.040 - 9225.452: 32.9219% ( 359)
00:08:56.720 9225.452 - 9275.865: 35.7031% ( 356)
00:08:56.720 9275.865 - 9326.277: 39.0000% ( 422)
00:08:56.720 9326.277 - 9376.689: 42.2969% ( 422)
00:08:56.720 9376.689 - 9427.102: 45.5781% ( 420)
00:08:56.720 9427.102 - 9477.514: 48.7891% ( 411)
00:08:56.720 9477.514 - 9527.926: 51.7422% ( 378)
00:08:56.720 9527.926 - 9578.338: 54.3047% ( 328)
00:08:56.720 9578.338 - 9628.751: 56.8984% ( 332)
00:08:56.720 9628.751 - 9679.163: 59.4766% ( 330)
00:08:56.720 9679.163 - 9729.575: 61.4219% ( 249)
00:08:56.720 9729.575 - 9779.988: 63.5625% ( 274)
00:08:56.720 9779.988 - 9830.400: 65.5312% ( 252)
00:08:56.720 9830.400 - 9880.812: 67.1172% ( 203)
00:08:56.720 9880.812 - 9931.225: 68.8984% ( 228)
00:08:56.720 9931.225 - 9981.637: 70.4062% ( 193)
00:08:56.720 9981.637 - 10032.049: 71.9219% ( 194)
00:08:56.720 10032.049 - 10082.462: 73.3125% ( 178)
00:08:56.720 10082.462 - 10132.874: 74.5625% ( 160)
00:08:56.720 10132.874 - 10183.286: 75.5859% ( 131)
00:08:56.720 10183.286 - 10233.698: 76.4844% ( 115)
00:08:56.720 10233.698 - 10284.111: 77.2266% ( 95)
00:08:56.720 10284.111 - 10334.523: 78.0469% ( 105)
00:08:56.720 10334.523 - 10384.935: 78.8203% ( 99)
00:08:56.720 10384.935 - 10435.348: 79.5234% ( 90)
00:08:56.720 10435.348 - 10485.760: 80.3281% ( 103)
00:08:56.720 10485.760 - 10536.172: 81.0625% ( 94)
00:08:56.720 10536.172 - 10586.585: 81.7969% ( 94)
00:08:56.720 10586.585 - 10636.997: 82.3281% ( 68)
00:08:56.720 10636.997 - 10687.409: 82.8516% ( 67)
00:08:56.720 10687.409 - 10737.822: 83.3281% ( 61)
00:08:56.720 10737.822 - 10788.234: 83.8828% ( 71)
00:08:56.720 10788.234 - 10838.646: 84.3672% ( 62)
00:08:56.720 10838.646 - 10889.058: 84.8516% ( 62)
00:08:56.720 10889.058 - 10939.471: 85.2031% ( 45)
00:08:56.720 10939.471 - 10989.883: 85.5312% ( 42)
00:08:56.720 10989.883 - 11040.295: 85.8828% ( 45)
00:08:56.720 11040.295 - 11090.708: 86.3359% ( 58)
00:08:56.720 11090.708 - 11141.120: 86.6953% ( 46)
00:08:56.720 11141.120 - 11191.532: 87.2656% ( 73)
00:08:56.720 11191.532 - 11241.945: 87.6094% ( 44)
00:08:56.720 11241.945 - 11292.357: 87.9609% ( 45)
00:08:56.720 11292.357 - 11342.769: 88.3281% ( 47)
00:08:56.720 11342.769 - 11393.182: 88.5938% ( 34)
00:08:56.720 11393.182 - 11443.594: 88.9062% ( 40)
00:08:56.720 11443.594 - 11494.006: 89.2188% ( 40)
00:08:56.720 11494.006 - 11544.418: 89.5938% ( 48)
00:08:56.720 11544.418 - 11594.831: 90.0781% ( 62)
00:08:56.720 11594.831 - 11645.243: 90.6016% ( 67)
00:08:56.720 11645.243 - 11695.655: 91.2344% ( 81)
00:08:56.720 11695.655 - 11746.068: 91.8516% ( 79)
00:08:56.720 11746.068 - 11796.480: 92.2578% ( 52)
00:08:56.720 11796.480 - 11846.892: 92.6641% ( 52)
00:08:56.720 11846.892 - 11897.305: 93.1953% ( 68)
00:08:56.720 11897.305 - 11947.717: 93.4844% ( 37)
00:08:56.720 11947.717 - 11998.129: 93.8438% ( 46)
00:08:56.720 11998.129 - 12048.542: 94.0703% ( 29)
00:08:56.720 12048.542 - 12098.954: 94.2500% ( 23)
00:08:56.720 12098.954 - 12149.366: 94.4219% ( 22)
00:08:56.720 12149.366 - 12199.778: 94.5391% ( 15)
00:08:56.720 12199.778 - 12250.191: 94.6875% ( 19)
00:08:56.720 12250.191 - 12300.603: 94.8828% ( 25)
00:08:56.720 12300.603 - 12351.015: 95.0625% ( 23)
00:08:56.720 12351.015 - 12401.428: 95.2344% ( 22)
00:08:56.720 12401.428 - 12451.840: 95.4297% ( 25)
00:08:56.720 12451.840 - 12502.252: 95.6328% ( 26)
00:08:56.720 12502.252 - 12552.665: 95.7891% ( 20)
00:08:56.720 12552.665 - 12603.077: 95.9922% ( 26)
00:08:56.720 12603.077 - 12653.489: 96.1406% ( 19)
00:08:56.720 12653.489 - 12703.902: 96.3281% ( 24)
00:08:56.720 12703.902 - 12754.314: 96.4375% ( 14)
00:08:56.720 12754.314 - 12804.726: 96.5625% ( 16)
00:08:56.720 12804.726 - 12855.138: 96.6719% ( 14)
00:08:56.720 12855.138 - 12905.551: 96.7812% ( 14)
00:08:56.720 12905.551 - 13006.375: 96.9922% ( 27)
00:08:56.720 13006.375 - 13107.200: 97.1875% ( 25)
00:08:56.720 13107.200 - 13208.025: 97.3359% ( 19)
00:08:56.720 13208.025 - 13308.849: 97.4375% ( 13)
00:08:56.720 13308.849 - 13409.674: 97.4844% ( 6)
00:08:56.720 13409.674 - 13510.498: 97.5000% ( 2)
00:08:56.720 13611.323 - 13712.148: 97.5078% ( 1)
00:08:56.720 13712.148 - 13812.972: 97.5703% ( 8)
00:08:56.720 13812.972 - 13913.797: 97.6875% ( 15)
00:08:56.720 13913.797 - 14014.622: 97.8203% ( 17)
00:08:56.720 14014.622 - 14115.446: 97.9141% ( 12)
00:08:56.720 14115.446 - 14216.271: 97.9844% ( 9)
00:08:56.720 14216.271 - 14317.095: 98.0000% ( 2)
00:08:56.720 15930.289 - 16031.114: 98.0391% ( 5)
00:08:56.720 16031.114 - 16131.938: 98.0859% ( 6)
00:08:56.720 16131.938 - 16232.763: 98.1328% ( 6)
00:08:56.720 16232.763 - 16333.588: 98.1797% ( 6)
00:08:56.720 16333.588 - 16434.412: 98.2188% ( 5)
00:08:56.720 16434.412 - 16535.237: 98.2734% ( 7)
00:08:56.720 16535.237 - 16636.062: 98.3203% ( 6)
00:08:56.720 16636.062 - 16736.886: 98.3672% ( 6)
00:08:56.720 16736.886 - 16837.711: 98.4141% ( 6)
00:08:56.720 16837.711 - 16938.535: 98.4766% ( 8)
00:08:56.720 16938.535 - 17039.360: 98.5781% ( 13)
00:08:56.720 17039.360 - 17140.185: 98.6406% ( 8)
00:08:56.720 17140.185 - 17241.009: 98.7109% ( 9)
00:08:56.720 17241.009 - 17341.834: 98.8594% ( 19)
00:08:56.720 17341.834 - 17442.658: 98.9062% ( 6)
00:08:56.720 17442.658 - 17543.483: 98.9453% ( 5)
00:08:56.720 17543.483 - 17644.308: 98.9844% ( 5)
00:08:56.720 17644.308 - 17745.132: 99.0000% ( 2)
00:08:56.720 19761.625 - 19862.449: 99.0469% ( 6)
00:08:56.720 19862.449 - 19963.274: 99.0938% ( 6)
00:08:56.720 19963.274 - 20064.098: 99.1172% ( 3)
00:08:56.720 20064.098 - 20164.923: 99.1562% ( 5)
00:08:56.720 20164.923 - 20265.748: 99.1875% ( 4)
00:08:56.720 20265.748 - 20366.572: 99.2188% ( 4)
00:08:56.720 20366.572 - 20467.397: 99.2344% ( 2)
00:08:56.720 20467.397 - 20568.222: 99.2656% ( 4)
00:08:56.720 20568.222 - 20669.046: 99.2891% ( 3)
00:08:56.720 20669.046 - 20769.871: 99.3203% ( 4)
00:08:56.720 20769.871 - 20870.695: 99.3438% ( 3)
00:08:56.720 20870.695 - 20971.520: 99.3672% ( 3)
00:08:56.720 20971.520 - 21072.345: 99.3984% ( 4)
00:08:56.720 21072.345 - 21173.169: 99.4219%
( 3) 00:08:56.720 21173.169 - 21273.994: 99.4453% ( 3) 00:08:56.720 21273.994 - 21374.818: 99.4766% ( 4) 00:08:56.720 21374.818 - 21475.643: 99.5000% ( 3) 00:08:56.720 24601.206 - 24702.031: 99.5078% ( 1) 00:08:56.720 24702.031 - 24802.855: 99.5469% ( 5) 00:08:56.720 24802.855 - 24903.680: 99.6094% ( 8) 00:08:56.720 24903.680 - 25004.505: 99.7344% ( 16) 00:08:56.720 25004.505 - 25105.329: 99.7656% ( 4) 00:08:56.720 25306.978 - 25407.803: 99.7734% ( 1) 00:08:56.720 26012.751 - 26214.400: 99.7969% ( 3) 00:08:56.720 26214.400 - 26416.049: 99.8438% ( 6) 00:08:56.720 26416.049 - 26617.698: 99.8984% ( 7) 00:08:56.720 26617.698 - 26819.348: 99.9453% ( 6) 00:08:56.720 26819.348 - 27020.997: 100.0000% ( 7) 00:08:56.720 00:08:56.720 ************************************ 00:08:56.720 END TEST nvme_perf 00:08:56.720 ************************************ 00:08:56.720 12:12:27 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:56.720 00:08:56.720 real 0m2.545s 00:08:56.720 user 0m2.217s 00:08:56.720 sys 0m0.226s 00:08:56.720 12:12:27 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.720 12:12:27 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:56.720 12:12:27 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:56.720 12:12:27 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:56.720 12:12:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.720 12:12:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:56.720 ************************************ 00:08:56.720 START TEST nvme_hello_world 00:08:56.720 ************************************ 00:08:56.720 12:12:27 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:56.978 Initializing NVMe Controllers 00:08:56.978 Attached to 0000:00:10.0 00:08:56.978 Namespace ID: 1 size: 6GB 00:08:56.978 Attached to 0000:00:11.0 00:08:56.978 Namespace ID: 1 size: 5GB 00:08:56.978 Attached to 0000:00:13.0 00:08:56.978 Namespace ID: 1 size: 1GB 00:08:56.978 Attached to 0000:00:12.0 00:08:56.978 Namespace ID: 1 size: 4GB 00:08:56.978 Namespace ID: 2 size: 4GB 00:08:56.978 Namespace ID: 3 size: 4GB 00:08:56.978 Initialization complete. 00:08:56.978 INFO: using host memory buffer for IO 00:08:56.978 Hello world! 00:08:56.978 INFO: using host memory buffer for IO 00:08:56.978 Hello world! 00:08:56.978 INFO: using host memory buffer for IO 00:08:56.978 Hello world! 00:08:56.978 INFO: using host memory buffer for IO 00:08:56.978 Hello world! 00:08:56.978 INFO: using host memory buffer for IO 00:08:56.978 Hello world! 00:08:56.978 INFO: using host memory buffer for IO 00:08:56.978 Hello world! 
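The six "Hello world!" lines above come from SPDK's hello_world example, which enumerates local PCIe controllers and prints each active namespace. A minimal sketch of that flow against the public SPDK NVMe API (the app name and print formatting are illustrative, not the exact example source):

#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Invoked for every controller found during enumeration; returning true
 * asks the driver to attach to it. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
        return true;
}

/* Invoked once per attached controller; walking the active namespaces is
 * what produces the "Namespace ID: N size: X GB" lines above. */
static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
        uint32_t nsid;

        printf("Attached to %s\n", trid->traddr);
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

                printf("  Namespace ID: %" PRIu32 " size: %" PRIu64 "GB\n",
                       nsid, spdk_nvme_ns_get_size(ns) / 1000000000ULL);
        }
}

int
main(void)
{
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "hello_world_sketch"; /* illustrative name */
        if (spdk_env_init(&opts) < 0) {
                return 1;
        }
        /* A NULL transport ID scans the local PCIe bus for NVMe devices. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}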
00:08:56.978 ************************************ 00:08:56.978 END TEST nvme_hello_world 00:08:56.978 ************************************ 00:08:56.978 00:08:56.978 real 0m0.320s 00:08:56.978 user 0m0.166s 00:08:56.978 sys 0m0.111s 00:08:56.978 12:12:27 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.978 12:12:27 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:56.978 12:12:27 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:56.978 12:12:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.978 12:12:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.978 12:12:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:56.978 ************************************ 00:08:56.978 START TEST nvme_sgl 00:08:56.978 ************************************ 00:08:56.978 12:12:27 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:57.236 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:57.236 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:57.236 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:57.236 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:57.236 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:57.236 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:57.236 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:57.236 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:57.236 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:57.236 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:57.236 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:57.236 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:57.236 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:57.236 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:57.236 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:57.236 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:57.236 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:57.236 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:57.236 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:57.236 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:57.236 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:57.236 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:57.236 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:57.236 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:57.236 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:57.236 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:57.236 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:57.236 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:57.236 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:57.236 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:57.236 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:57.236 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:57.236 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:57.236 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:08:57.236 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:57.236 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:57.236 NVMe Readv/Writev Request test 00:08:57.236 Attached to 0000:00:10.0 00:08:57.236 Attached to 0000:00:11.0 00:08:57.236 Attached to 0000:00:13.0 00:08:57.236 Attached to 0000:00:12.0 00:08:57.236 0000:00:10.0: build_io_request_2 test passed 00:08:57.236 0000:00:10.0: build_io_request_4 test passed 00:08:57.236 0000:00:10.0: build_io_request_5 test passed 00:08:57.236 0000:00:10.0: build_io_request_6 test passed 00:08:57.236 0000:00:10.0: build_io_request_7 test passed 00:08:57.236 0000:00:10.0: build_io_request_10 test passed 00:08:57.236 0000:00:11.0: build_io_request_2 test passed 00:08:57.236 0000:00:11.0: build_io_request_4 test passed 00:08:57.236 0000:00:11.0: build_io_request_5 test passed 00:08:57.236 0000:00:11.0: build_io_request_6 test passed 00:08:57.236 0000:00:11.0: build_io_request_7 test passed 00:08:57.236 0000:00:11.0: build_io_request_10 test passed 00:08:57.236 Cleaning up... 00:08:57.236 ************************************ 00:08:57.236 END TEST nvme_sgl 00:08:57.236 ************************************ 00:08:57.236 00:08:57.236 real 0m0.308s 00:08:57.236 user 0m0.154s 00:08:57.236 sys 0m0.107s 00:08:57.236 12:12:28 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.236 12:12:28 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:57.236 12:12:28 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:57.236 12:12:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.236 12:12:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.236 12:12:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.236 ************************************ 00:08:57.236 START TEST nvme_e2edp 00:08:57.236 ************************************ 00:08:57.236 12:12:28 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:57.494 NVMe Write/Read with End-to-End data protection test 00:08:57.494 Attached to 0000:00:10.0 00:08:57.494 Attached to 0000:00:11.0 00:08:57.494 Attached to 0000:00:13.0 00:08:57.494 Attached to 0000:00:12.0 00:08:57.494 Cleaning up... 
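nvme_e2edp attaches and immediately cleans up here: the end-to-end data protection cases only run against namespaces formatted with protection information, which these emulated QEMU namespaces apparently are not. For orientation, a protected write under SPDK reduces to one flag on the metadata-aware I/O call. A hedged sketch, assuming an extended-LBA namespace formatted with PI; error handling omitted:

#include "spdk/nvme.h"

/* One logical block written with PRACT set: the controller generates and
 * verifies the protection information itself, so no separate metadata
 * buffer is passed. */
static int
write_with_pract(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                 void *buf, uint64_t lba, spdk_nvme_cmd_cb cb_fn, void *cb_arg)
{
        return spdk_nvme_ns_cmd_write_with_md(ns, qpair, buf,
                                              NULL,   /* PI interleaved/generated */
                                              lba, 1, /* one logical block */
                                              cb_fn, cb_arg,
                                              SPDK_NVME_IO_FLAGS_PRACT,
                                              0, 0);  /* apptag mask/value unused */
}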
00:08:57.494 ************************************ 00:08:57.494 END TEST nvme_e2edp 00:08:57.494 ************************************ 00:08:57.494 00:08:57.494 real 0m0.234s 00:08:57.494 user 0m0.080s 00:08:57.494 sys 0m0.098s 00:08:57.494 12:12:28 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.494 12:12:28 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:57.494 12:12:28 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:57.494 12:12:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.494 12:12:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.494 12:12:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.494 ************************************ 00:08:57.494 START TEST nvme_reserve 00:08:57.494 ************************************ 00:08:57.494 12:12:28 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:57.751 ===================================================== 00:08:57.751 NVMe Controller at PCI bus 0, device 16, function 0 00:08:57.751 ===================================================== 00:08:57.751 Reservations: Not Supported 00:08:57.751 ===================================================== 00:08:57.751 NVMe Controller at PCI bus 0, device 17, function 0 00:08:57.751 ===================================================== 00:08:57.751 Reservations: Not Supported 00:08:57.751 ===================================================== 00:08:57.751 NVMe Controller at PCI bus 0, device 19, function 0 00:08:57.751 ===================================================== 00:08:57.751 Reservations: Not Supported 00:08:57.751 ===================================================== 00:08:57.751 NVMe Controller at PCI bus 0, device 18, function 0 00:08:57.751 ===================================================== 00:08:57.751 Reservations: Not Supported 00:08:57.751 Reservation test passed 00:08:57.751 ************************************ 00:08:57.751 END TEST nvme_reserve 00:08:57.751 ************************************ 00:08:57.751 00:08:57.751 real 0m0.216s 00:08:57.751 user 0m0.074s 00:08:57.751 sys 0m0.101s 00:08:57.751 12:12:28 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.752 12:12:28 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:57.752 12:12:28 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:57.752 12:12:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.752 12:12:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.752 12:12:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.752 ************************************ 00:08:57.752 START TEST nvme_err_injection 00:08:57.752 ************************************ 00:08:57.752 12:12:28 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:58.009 NVMe Error Injection test 00:08:58.009 Attached to 0000:00:10.0 00:08:58.009 Attached to 0000:00:11.0 00:08:58.009 Attached to 0000:00:13.0 00:08:58.009 Attached to 0000:00:12.0 00:08:58.009 0000:00:10.0: get features failed as expected 00:08:58.009 0000:00:11.0: get features failed as expected 00:08:58.009 0000:00:13.0: get features failed as expected 00:08:58.009 0000:00:12.0: get features failed as expected 00:08:58.009 
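The four "get features failed as expected" lines above are produced by the driver's software error-injection hook, not by real device faults. Roughly, the test arms a one-shot failure for the Get Features opcode on the admin queue and later disarms it; a sketch (the exact status codes the test injects may differ, and the hook needs an SPDK build with error injection compiled in):

#include "spdk/nvme.h"

/* Arm a single injected failure for Get Features; a NULL qpair selects
 * the admin queue pair. */
static int
inject_get_features_error(struct spdk_nvme_ctrlr *ctrlr)
{
        return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
                        SPDK_NVME_OPC_GET_FEATURES,
                        false, /* still submit the command to the device */
                        0,     /* no artificial timeout */
                        1,     /* fail exactly one command */
                        SPDK_NVME_SCT_GENERIC,
                        SPDK_NVME_SC_INVALID_FIELD);
}

/* ...and disarm once the expected failure has been observed:
 * spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
 *                                            SPDK_NVME_OPC_GET_FEATURES); */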
0000:00:10.0: get features successfully as expected 00:08:58.009 0000:00:11.0: get features successfully as expected 00:08:58.009 0000:00:13.0: get features successfully as expected 00:08:58.009 0000:00:12.0: get features successfully as expected 00:08:58.009 0000:00:10.0: read failed as expected 00:08:58.009 0000:00:11.0: read failed as expected 00:08:58.009 0000:00:13.0: read failed as expected 00:08:58.009 0000:00:12.0: read failed as expected 00:08:58.009 0000:00:10.0: read successfully as expected 00:08:58.009 0000:00:11.0: read successfully as expected 00:08:58.009 0000:00:13.0: read successfully as expected 00:08:58.009 0000:00:12.0: read successfully as expected 00:08:58.009 Cleaning up... 00:08:58.009 ************************************ 00:08:58.009 END TEST nvme_err_injection 00:08:58.009 ************************************ 00:08:58.009 00:08:58.009 real 0m0.238s 00:08:58.009 user 0m0.092s 00:08:58.009 sys 0m0.100s 00:08:58.009 12:12:28 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.009 12:12:28 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:58.009 12:12:28 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:58.009 12:12:28 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:58.009 12:12:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.009 12:12:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.009 ************************************ 00:08:58.009 START TEST nvme_overhead 00:08:58.009 ************************************ 00:08:58.009 12:12:28 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:59.416 Initializing NVMe Controllers 00:08:59.416 Attached to 0000:00:10.0 00:08:59.416 Attached to 0000:00:11.0 00:08:59.416 Attached to 0000:00:13.0 00:08:59.416 Attached to 0000:00:12.0 00:08:59.416 Initialization complete. Launching workers. 
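The submit/complete histograms that follow measure per-command software overhead in nanoseconds: time spent in the driver's submission path and in completion processing, excluding device latency. The sampling idea reduces to reading the tick counter around those paths; a hedged sketch, not the overhead tool's exact code:

#include "spdk/env.h"
#include "spdk/nvme.h"

/* Time a single submission in nanoseconds: delta ticks scaled by the tick
 * rate. The completion side is sampled the same way around
 * spdk_nvme_qpair_process_completions(). */
static uint64_t
timed_submit_ns(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                void *buf, uint64_t lba, spdk_nvme_cmd_cb cb_fn, void *cb_arg)
{
        uint64_t start = spdk_get_ticks();
        int rc = spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, 1, cb_fn, cb_arg, 0);

        if (rc != 0) {
                return 0; /* submission failed; nothing to record */
        }
        return (spdk_get_ticks() - start) * 1000000000ULL / spdk_get_ticks_hz();
}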
00:08:59.416 submit (in ns) avg, min, max = 13429.2, 11426.9, 102197.7 00:08:59.416 complete (in ns) avg, min, max = 8982.9, 7487.7, 335323.8 00:08:59.416 00:08:59.416 Submit histogram 00:08:59.416 ================ 00:08:59.416 Range in us Cumulative Count 00:08:59.416 11.422 - 11.471: 0.0156% ( 1) 00:08:59.416 11.569 - 11.618: 0.0311% ( 1) 00:08:59.416 11.618 - 11.668: 0.0623% ( 2) 00:08:59.416 11.668 - 11.717: 0.1556% ( 6) 00:08:59.416 11.717 - 11.766: 0.2490% ( 6) 00:08:59.416 11.766 - 11.815: 0.3735% ( 8) 00:08:59.416 11.815 - 11.865: 0.5603% ( 12) 00:08:59.416 11.865 - 11.914: 0.9805% ( 27) 00:08:59.416 11.914 - 11.963: 1.6654% ( 44) 00:08:59.416 11.963 - 12.012: 2.7393% ( 69) 00:08:59.416 12.012 - 12.062: 4.4514% ( 110) 00:08:59.416 12.062 - 12.111: 7.8755% ( 220) 00:08:59.416 12.111 - 12.160: 12.2335% ( 280) 00:08:59.416 12.160 - 12.209: 18.2568% ( 387) 00:08:59.416 12.209 - 12.258: 25.1206% ( 441) 00:08:59.416 12.258 - 12.308: 32.7315% ( 489) 00:08:59.416 12.308 - 12.357: 40.0000% ( 467) 00:08:59.416 12.357 - 12.406: 47.3307% ( 471) 00:08:59.416 12.406 - 12.455: 53.2296% ( 379) 00:08:59.416 12.455 - 12.505: 58.3969% ( 332) 00:08:59.416 12.505 - 12.554: 62.3191% ( 252) 00:08:59.416 12.554 - 12.603: 66.2412% ( 252) 00:08:59.416 12.603 - 12.702: 72.5759% ( 407) 00:08:59.416 12.702 - 12.800: 77.7121% ( 330) 00:08:59.416 12.800 - 12.898: 81.5097% ( 244) 00:08:59.416 12.898 - 12.997: 83.8288% ( 149) 00:08:59.416 12.997 - 13.095: 85.5097% ( 108) 00:08:59.416 13.095 - 13.194: 86.1790% ( 43) 00:08:59.416 13.194 - 13.292: 86.6459% ( 30) 00:08:59.416 13.292 - 13.391: 86.8638% ( 14) 00:08:59.416 13.391 - 13.489: 87.1284% ( 17) 00:08:59.416 13.489 - 13.588: 87.2840% ( 10) 00:08:59.416 13.588 - 13.686: 87.4241% ( 9) 00:08:59.416 13.686 - 13.785: 87.5019% ( 5) 00:08:59.416 13.785 - 13.883: 87.5798% ( 5) 00:08:59.416 13.883 - 13.982: 87.6265% ( 3) 00:08:59.416 13.982 - 14.080: 87.7198% ( 6) 00:08:59.416 14.080 - 14.178: 87.7510% ( 2) 00:08:59.416 14.178 - 14.277: 87.8755% ( 8) 00:08:59.416 14.277 - 14.375: 88.0311% ( 10) 00:08:59.416 14.375 - 14.474: 88.2023% ( 11) 00:08:59.416 14.474 - 14.572: 88.3735% ( 11) 00:08:59.416 14.572 - 14.671: 88.5292% ( 10) 00:08:59.416 14.671 - 14.769: 88.7004% ( 11) 00:08:59.416 14.769 - 14.868: 88.8093% ( 7) 00:08:59.416 14.868 - 14.966: 88.9805% ( 11) 00:08:59.416 14.966 - 15.065: 89.2451% ( 17) 00:08:59.416 15.065 - 15.163: 89.4942% ( 16) 00:08:59.416 15.163 - 15.262: 89.7432% ( 16) 00:08:59.416 15.262 - 15.360: 89.9767% ( 15) 00:08:59.416 15.360 - 15.458: 90.2568% ( 18) 00:08:59.416 15.458 - 15.557: 90.5214% ( 17) 00:08:59.416 15.557 - 15.655: 90.6459% ( 8) 00:08:59.416 15.655 - 15.754: 90.8327% ( 12) 00:08:59.416 15.754 - 15.852: 91.0973% ( 17) 00:08:59.416 15.852 - 15.951: 91.3152% ( 14) 00:08:59.416 15.951 - 16.049: 91.4864% ( 11) 00:08:59.416 16.049 - 16.148: 91.7977% ( 20) 00:08:59.416 16.148 - 16.246: 92.1089% ( 20) 00:08:59.416 16.246 - 16.345: 92.3580% ( 16) 00:08:59.416 16.345 - 16.443: 92.5914% ( 15) 00:08:59.416 16.443 - 16.542: 92.8872% ( 19) 00:08:59.416 16.542 - 16.640: 93.1362% ( 16) 00:08:59.416 16.640 - 16.738: 93.4319% ( 19) 00:08:59.416 16.738 - 16.837: 93.6342% ( 13) 00:08:59.416 16.837 - 16.935: 93.8988% ( 17) 00:08:59.416 16.935 - 17.034: 94.0389% ( 9) 00:08:59.416 17.034 - 17.132: 94.2879% ( 16) 00:08:59.416 17.132 - 17.231: 94.4747% ( 12) 00:08:59.416 17.231 - 17.329: 94.6459% ( 11) 00:08:59.416 17.329 - 17.428: 94.8171% ( 11) 00:08:59.416 17.428 - 17.526: 94.9105% ( 6) 00:08:59.416 17.526 - 17.625: 94.9572% ( 3) 00:08:59.416 17.625 - 
17.723: 95.1128% ( 10) 00:08:59.416 17.723 - 17.822: 95.1907% ( 5) 00:08:59.416 17.822 - 17.920: 95.2374% ( 3) 00:08:59.416 17.920 - 18.018: 95.2996% ( 4) 00:08:59.416 18.018 - 18.117: 95.3463% ( 3) 00:08:59.416 18.117 - 18.215: 95.4086% ( 4) 00:08:59.416 18.215 - 18.314: 95.4553% ( 3) 00:08:59.416 18.314 - 18.412: 95.5175% ( 4) 00:08:59.416 18.412 - 18.511: 95.6265% ( 7) 00:08:59.416 18.511 - 18.609: 95.7354% ( 7) 00:08:59.416 18.609 - 18.708: 95.8132% ( 5) 00:08:59.416 18.708 - 18.806: 95.8755% ( 4) 00:08:59.416 18.806 - 18.905: 96.0156% ( 9) 00:08:59.416 18.905 - 19.003: 96.1401% ( 8) 00:08:59.416 19.003 - 19.102: 96.2023% ( 4) 00:08:59.416 19.102 - 19.200: 96.2335% ( 2) 00:08:59.416 19.200 - 19.298: 96.3268% ( 6) 00:08:59.416 19.298 - 19.397: 96.4514% ( 8) 00:08:59.416 19.397 - 19.495: 96.4669% ( 1) 00:08:59.416 19.495 - 19.594: 96.5136% ( 3) 00:08:59.416 19.594 - 19.692: 96.5447% ( 2) 00:08:59.416 19.692 - 19.791: 96.6381% ( 6) 00:08:59.416 19.791 - 19.889: 96.7160% ( 5) 00:08:59.416 19.889 - 19.988: 96.7626% ( 3) 00:08:59.416 19.988 - 20.086: 96.7938% ( 2) 00:08:59.416 20.086 - 20.185: 96.8716% ( 5) 00:08:59.416 20.185 - 20.283: 96.9494% ( 5) 00:08:59.416 20.283 - 20.382: 96.9805% ( 2) 00:08:59.416 20.382 - 20.480: 96.9961% ( 1) 00:08:59.416 20.480 - 20.578: 97.0117% ( 1) 00:08:59.416 20.578 - 20.677: 97.0739% ( 4) 00:08:59.416 20.677 - 20.775: 97.1051% ( 2) 00:08:59.416 20.874 - 20.972: 97.1829% ( 5) 00:08:59.416 21.071 - 21.169: 97.2296% ( 3) 00:08:59.416 21.268 - 21.366: 97.2607% ( 2) 00:08:59.416 21.366 - 21.465: 97.2763% ( 1) 00:08:59.416 21.465 - 21.563: 97.2918% ( 1) 00:08:59.416 21.563 - 21.662: 97.3230% ( 2) 00:08:59.416 21.662 - 21.760: 97.3385% ( 1) 00:08:59.416 21.760 - 21.858: 97.3541% ( 1) 00:08:59.416 21.858 - 21.957: 97.3852% ( 2) 00:08:59.416 22.646 - 22.745: 97.4008% ( 1) 00:08:59.416 22.745 - 22.843: 97.4163% ( 1) 00:08:59.416 22.942 - 23.040: 97.4319% ( 1) 00:08:59.416 23.434 - 23.532: 97.4475% ( 1) 00:08:59.416 23.729 - 23.828: 97.4630% ( 1) 00:08:59.416 23.828 - 23.926: 97.4942% ( 2) 00:08:59.416 23.926 - 24.025: 97.5097% ( 1) 00:08:59.416 24.123 - 24.222: 97.5253% ( 1) 00:08:59.416 25.600 - 25.797: 97.5409% ( 1) 00:08:59.416 26.585 - 26.782: 97.5564% ( 1) 00:08:59.416 27.372 - 27.569: 97.5720% ( 1) 00:08:59.416 27.569 - 27.766: 97.6031% ( 2) 00:08:59.416 27.766 - 27.963: 97.7276% ( 8) 00:08:59.417 27.963 - 28.160: 97.8677% ( 9) 00:08:59.417 28.160 - 28.357: 97.9300% ( 4) 00:08:59.417 28.357 - 28.554: 97.9455% ( 1) 00:08:59.417 28.554 - 28.751: 97.9611% ( 1) 00:08:59.417 28.751 - 28.948: 97.9767% ( 1) 00:08:59.417 28.948 - 29.145: 98.0078% ( 2) 00:08:59.417 29.932 - 30.129: 98.0233% ( 1) 00:08:59.417 30.720 - 30.917: 98.0700% ( 3) 00:08:59.417 30.917 - 31.114: 98.1479% ( 5) 00:08:59.417 31.114 - 31.311: 98.1634% ( 1) 00:08:59.417 31.311 - 31.508: 98.2568% ( 6) 00:08:59.417 31.508 - 31.705: 98.3813% ( 8) 00:08:59.417 31.705 - 31.902: 98.4436% ( 4) 00:08:59.417 31.902 - 32.098: 98.5992% ( 10) 00:08:59.417 32.098 - 32.295: 98.9105% ( 20) 00:08:59.417 32.295 - 32.492: 99.0039% ( 6) 00:08:59.417 32.492 - 32.689: 99.2840% ( 18) 00:08:59.417 32.689 - 32.886: 99.3619% ( 5) 00:08:59.417 32.886 - 33.083: 99.4086% ( 3) 00:08:59.417 33.083 - 33.280: 99.5175% ( 7) 00:08:59.417 33.280 - 33.477: 99.5486% ( 2) 00:08:59.417 33.477 - 33.674: 99.5642% ( 1) 00:08:59.417 33.674 - 33.871: 99.5798% ( 1) 00:08:59.417 33.871 - 34.068: 99.6109% ( 2) 00:08:59.417 34.658 - 34.855: 99.6265% ( 1) 00:08:59.417 35.249 - 35.446: 99.6576% ( 2) 00:08:59.417 35.446 - 35.643: 99.6732% ( 1) 
00:08:59.417 35.643 - 35.840: 99.6887% ( 1) 00:08:59.417 36.234 - 36.431: 99.7043% ( 1) 00:08:59.417 37.809 - 38.006: 99.7198% ( 1) 00:08:59.417 39.582 - 39.778: 99.7354% ( 1) 00:08:59.417 43.126 - 43.323: 99.7510% ( 1) 00:08:59.417 43.914 - 44.111: 99.7821% ( 2) 00:08:59.417 48.837 - 49.034: 99.7977% ( 1) 00:08:59.417 51.988 - 52.382: 99.8132% ( 1) 00:08:59.417 52.775 - 53.169: 99.8288% ( 1) 00:08:59.417 53.169 - 53.563: 99.8444% ( 1) 00:08:59.417 53.957 - 54.351: 99.8599% ( 1) 00:08:59.417 57.502 - 57.895: 99.8755% ( 1) 00:08:59.417 60.258 - 60.652: 99.8911% ( 1) 00:08:59.417 61.440 - 61.834: 99.9066% ( 1) 00:08:59.417 74.043 - 74.437: 99.9222% ( 1) 00:08:59.417 75.225 - 75.618: 99.9377% ( 1) 00:08:59.417 83.495 - 83.889: 99.9533% ( 1) 00:08:59.417 89.009 - 89.403: 99.9689% ( 1) 00:08:59.417 98.068 - 98.462: 99.9844% ( 1) 00:08:59.417 101.612 - 102.400: 100.0000% ( 1) 00:08:59.417 00:08:59.417 Complete histogram 00:08:59.417 ================== 00:08:59.417 Range in us Cumulative Count 00:08:59.417 7.483 - 7.532: 0.1245% ( 8) 00:08:59.417 7.532 - 7.582: 0.3735% ( 16) 00:08:59.417 7.582 - 7.631: 0.5603% ( 12) 00:08:59.417 7.631 - 7.680: 0.7160% ( 10) 00:08:59.417 7.680 - 7.729: 0.8716% ( 10) 00:08:59.417 7.729 - 7.778: 1.0117% ( 9) 00:08:59.417 7.778 - 7.828: 1.0584% ( 3) 00:08:59.417 7.877 - 7.926: 1.2451% ( 12) 00:08:59.417 7.926 - 7.975: 2.6148% ( 88) 00:08:59.417 7.975 - 8.025: 8.8716% ( 402) 00:08:59.417 8.025 - 8.074: 20.7004% ( 760) 00:08:59.417 8.074 - 8.123: 32.8405% ( 780) 00:08:59.417 8.123 - 8.172: 44.1712% ( 728) 00:08:59.417 8.172 - 8.222: 53.3541% ( 590) 00:08:59.417 8.222 - 8.271: 62.0389% ( 558) 00:08:59.417 8.271 - 8.320: 68.3891% ( 408) 00:08:59.417 8.320 - 8.369: 74.1479% ( 370) 00:08:59.417 8.369 - 8.418: 78.2879% ( 266) 00:08:59.417 8.418 - 8.468: 81.1673% ( 185) 00:08:59.417 8.468 - 8.517: 83.6109% ( 157) 00:08:59.417 8.517 - 8.566: 85.4942% ( 121) 00:08:59.417 8.566 - 8.615: 86.5837% ( 70) 00:08:59.417 8.615 - 8.665: 87.3619% ( 50) 00:08:59.417 8.665 - 8.714: 87.7821% ( 27) 00:08:59.417 8.714 - 8.763: 88.1556% ( 24) 00:08:59.417 8.763 - 8.812: 88.5914% ( 28) 00:08:59.417 8.812 - 8.862: 88.8716% ( 18) 00:08:59.417 8.862 - 8.911: 89.1984% ( 21) 00:08:59.417 8.911 - 8.960: 89.5409% ( 22) 00:08:59.417 8.960 - 9.009: 89.7432% ( 13) 00:08:59.417 9.009 - 9.058: 90.0389% ( 19) 00:08:59.417 9.058 - 9.108: 90.2412% ( 13) 00:08:59.417 9.108 - 9.157: 90.4436% ( 13) 00:08:59.417 9.157 - 9.206: 90.5837% ( 9) 00:08:59.417 9.206 - 9.255: 90.8171% ( 15) 00:08:59.417 9.255 - 9.305: 90.8638% ( 3) 00:08:59.417 9.305 - 9.354: 90.9572% ( 6) 00:08:59.417 9.354 - 9.403: 91.0350% ( 5) 00:08:59.417 9.403 - 9.452: 91.1440% ( 7) 00:08:59.417 9.452 - 9.502: 91.2374% ( 6) 00:08:59.417 9.502 - 9.551: 91.2840% ( 3) 00:08:59.417 9.551 - 9.600: 91.3619% ( 5) 00:08:59.417 9.600 - 9.649: 91.4086% ( 3) 00:08:59.417 9.649 - 9.698: 91.4708% ( 4) 00:08:59.417 9.698 - 9.748: 91.5019% ( 2) 00:08:59.417 9.797 - 9.846: 91.5331% ( 2) 00:08:59.417 9.846 - 9.895: 91.5486% ( 1) 00:08:59.417 9.945 - 9.994: 91.5953% ( 3) 00:08:59.417 9.994 - 10.043: 91.6420% ( 3) 00:08:59.417 10.043 - 10.092: 91.6576% ( 1) 00:08:59.417 10.092 - 10.142: 91.6887% ( 2) 00:08:59.417 10.142 - 10.191: 91.7198% ( 2) 00:08:59.417 10.191 - 10.240: 91.7977% ( 5) 00:08:59.417 10.240 - 10.289: 91.8755% ( 5) 00:08:59.417 10.289 - 10.338: 91.9222% ( 3) 00:08:59.417 10.338 - 10.388: 91.9533% ( 2) 00:08:59.417 10.388 - 10.437: 92.0623% ( 7) 00:08:59.417 10.437 - 10.486: 92.1089% ( 3) 00:08:59.417 10.486 - 10.535: 92.1556% ( 3) 00:08:59.417 10.535 
- 10.585: 92.2335% ( 5) 00:08:59.417 10.585 - 10.634: 92.2646% ( 2) 00:08:59.417 10.634 - 10.683: 92.2802% ( 1) 00:08:59.417 10.683 - 10.732: 92.3268% ( 3) 00:08:59.417 10.732 - 10.782: 92.3891% ( 4) 00:08:59.417 10.782 - 10.831: 92.4514% ( 4) 00:08:59.417 10.831 - 10.880: 92.4669% ( 1) 00:08:59.417 10.880 - 10.929: 92.4981% ( 2) 00:08:59.417 10.929 - 10.978: 92.5447% ( 3) 00:08:59.417 10.978 - 11.028: 92.5759% ( 2) 00:08:59.417 11.028 - 11.077: 92.6537% ( 5) 00:08:59.417 11.077 - 11.126: 92.7160% ( 4) 00:08:59.417 11.126 - 11.175: 92.8093% ( 6) 00:08:59.417 11.225 - 11.274: 92.8249% ( 1) 00:08:59.417 11.274 - 11.323: 92.8872% ( 4) 00:08:59.417 11.323 - 11.372: 92.9650% ( 5) 00:08:59.417 11.372 - 11.422: 93.0117% ( 3) 00:08:59.417 11.422 - 11.471: 93.1362% ( 8) 00:08:59.417 11.471 - 11.520: 93.2918% ( 10) 00:08:59.417 11.520 - 11.569: 93.4163% ( 8) 00:08:59.417 11.569 - 11.618: 93.5097% ( 6) 00:08:59.417 11.618 - 11.668: 93.6809% ( 11) 00:08:59.417 11.668 - 11.717: 93.7276% ( 3) 00:08:59.417 11.717 - 11.766: 93.8210% ( 6) 00:08:59.417 11.766 - 11.815: 93.9144% ( 6) 00:08:59.417 11.815 - 11.865: 93.9611% ( 3) 00:08:59.417 11.865 - 11.914: 94.0856% ( 8) 00:08:59.417 11.914 - 11.963: 94.2412% ( 10) 00:08:59.417 11.963 - 12.012: 94.3346% ( 6) 00:08:59.417 12.012 - 12.062: 94.3813% ( 3) 00:08:59.417 12.062 - 12.111: 94.4591% ( 5) 00:08:59.417 12.111 - 12.160: 94.5525% ( 6) 00:08:59.417 12.160 - 12.209: 94.6459% ( 6) 00:08:59.417 12.209 - 12.258: 94.7860% ( 9) 00:08:59.417 12.308 - 12.357: 94.8482% ( 4) 00:08:59.417 12.357 - 12.406: 94.9416% ( 6) 00:08:59.417 12.406 - 12.455: 94.9572% ( 1) 00:08:59.417 12.455 - 12.505: 94.9883% ( 2) 00:08:59.417 12.505 - 12.554: 95.0350% ( 3) 00:08:59.417 12.554 - 12.603: 95.1128% ( 5) 00:08:59.417 12.603 - 12.702: 95.1907% ( 5) 00:08:59.417 12.702 - 12.800: 95.3463% ( 10) 00:08:59.417 12.800 - 12.898: 95.4086% ( 4) 00:08:59.417 12.898 - 12.997: 95.4708% ( 4) 00:08:59.417 12.997 - 13.095: 95.5331% ( 4) 00:08:59.417 13.095 - 13.194: 95.5953% ( 4) 00:08:59.417 13.194 - 13.292: 95.6265% ( 2) 00:08:59.417 13.292 - 13.391: 95.6732% ( 3) 00:08:59.417 13.391 - 13.489: 95.7821% ( 7) 00:08:59.417 13.489 - 13.588: 95.7977% ( 1) 00:08:59.417 13.588 - 13.686: 95.8132% ( 1) 00:08:59.417 13.686 - 13.785: 95.8444% ( 2) 00:08:59.417 13.785 - 13.883: 95.8755% ( 2) 00:08:59.417 13.883 - 13.982: 95.9066% ( 2) 00:08:59.417 13.982 - 14.080: 95.9222% ( 1) 00:08:59.417 14.080 - 14.178: 95.9844% ( 4) 00:08:59.417 14.178 - 14.277: 96.0311% ( 3) 00:08:59.417 14.277 - 14.375: 96.0623% ( 2) 00:08:59.417 14.375 - 14.474: 96.0934% ( 2) 00:08:59.417 14.474 - 14.572: 96.1868% ( 6) 00:08:59.417 14.572 - 14.671: 96.3268% ( 9) 00:08:59.417 14.671 - 14.769: 96.4825% ( 10) 00:08:59.417 14.769 - 14.868: 96.5292% ( 3) 00:08:59.417 14.868 - 14.966: 96.6070% ( 5) 00:08:59.417 14.966 - 15.065: 96.7160% ( 7) 00:08:59.417 15.065 - 15.163: 96.7626% ( 3) 00:08:59.417 15.163 - 15.262: 96.8249% ( 4) 00:08:59.417 15.262 - 15.360: 96.9494% ( 8) 00:08:59.417 15.360 - 15.458: 96.9961% ( 3) 00:08:59.417 15.458 - 15.557: 97.0739% ( 5) 00:08:59.417 15.557 - 15.655: 97.1362% ( 4) 00:08:59.417 15.655 - 15.754: 97.2140% ( 5) 00:08:59.417 15.754 - 15.852: 97.2296% ( 1) 00:08:59.417 15.951 - 16.049: 97.2607% ( 2) 00:08:59.417 16.049 - 16.148: 97.2918% ( 2) 00:08:59.417 16.148 - 16.246: 97.3385% ( 3) 00:08:59.417 16.246 - 16.345: 97.3541% ( 1) 00:08:59.417 16.345 - 16.443: 97.4008% ( 3) 00:08:59.417 16.443 - 16.542: 97.4163% ( 1) 00:08:59.417 16.738 - 16.837: 97.4319% ( 1) 00:08:59.417 17.034 - 17.132: 97.4630% ( 2) 
00:08:59.417 17.132 - 17.231: 97.4786% ( 1) 00:08:59.418 17.231 - 17.329: 97.4942% ( 1) 00:08:59.418 17.723 - 17.822: 97.5097% ( 1) 00:08:59.418 17.920 - 18.018: 97.5253% ( 1) 00:08:59.418 18.018 - 18.117: 97.5409% ( 1) 00:08:59.418 18.806 - 18.905: 97.5564% ( 1) 00:08:59.418 19.298 - 19.397: 97.5720% ( 1) 00:08:59.418 20.283 - 20.382: 97.6965% ( 8) 00:08:59.418 20.382 - 20.480: 97.7588% ( 4) 00:08:59.418 20.480 - 20.578: 97.7899% ( 2) 00:08:59.418 20.578 - 20.677: 97.8521% ( 4) 00:08:59.418 20.775 - 20.874: 97.8988% ( 3) 00:08:59.418 20.874 - 20.972: 97.9300% ( 2) 00:08:59.418 20.972 - 21.071: 97.9611% ( 2) 00:08:59.418 21.366 - 21.465: 97.9922% ( 2) 00:08:59.418 21.465 - 21.563: 98.0078% ( 1) 00:08:59.418 21.957 - 22.055: 98.0389% ( 2) 00:08:59.418 22.055 - 22.154: 98.1634% ( 8) 00:08:59.418 22.154 - 22.252: 98.4591% ( 19) 00:08:59.418 22.252 - 22.351: 98.6770% ( 14) 00:08:59.418 22.351 - 22.449: 98.7704% ( 6) 00:08:59.418 22.449 - 22.548: 98.8794% ( 7) 00:08:59.418 22.548 - 22.646: 98.9572% ( 5) 00:08:59.418 22.646 - 22.745: 99.0350% ( 5) 00:08:59.418 22.745 - 22.843: 99.0661% ( 2) 00:08:59.418 22.843 - 22.942: 99.1284% ( 4) 00:08:59.418 22.942 - 23.040: 99.1595% ( 2) 00:08:59.418 23.040 - 23.138: 99.2529% ( 6) 00:08:59.418 23.138 - 23.237: 99.2996% ( 3) 00:08:59.418 23.237 - 23.335: 99.3307% ( 2) 00:08:59.418 23.335 - 23.434: 99.3463% ( 1) 00:08:59.418 23.434 - 23.532: 99.3930% ( 3) 00:08:59.418 23.532 - 23.631: 99.4241% ( 2) 00:08:59.418 23.631 - 23.729: 99.4397% ( 1) 00:08:59.418 23.729 - 23.828: 99.4553% ( 1) 00:08:59.418 23.828 - 23.926: 99.4708% ( 1) 00:08:59.418 24.025 - 24.123: 99.4864% ( 1) 00:08:59.418 24.123 - 24.222: 99.5019% ( 1) 00:08:59.418 24.222 - 24.320: 99.5331% ( 2) 00:08:59.418 24.517 - 24.615: 99.5486% ( 1) 00:08:59.418 25.206 - 25.403: 99.5953% ( 3) 00:08:59.418 27.372 - 27.569: 99.6109% ( 1) 00:08:59.418 27.569 - 27.766: 99.6265% ( 1) 00:08:59.418 28.554 - 28.751: 99.6576% ( 2) 00:08:59.418 29.342 - 29.538: 99.6732% ( 1) 00:08:59.418 31.114 - 31.311: 99.6887% ( 1) 00:08:59.418 32.886 - 33.083: 99.7043% ( 1) 00:08:59.418 34.265 - 34.462: 99.7198% ( 1) 00:08:59.418 35.052 - 35.249: 99.7354% ( 1) 00:08:59.418 35.840 - 36.037: 99.7510% ( 1) 00:08:59.418 38.006 - 38.203: 99.7665% ( 1) 00:08:59.418 38.400 - 38.597: 99.7821% ( 1) 00:08:59.418 38.597 - 38.794: 99.8132% ( 2) 00:08:59.418 38.794 - 38.991: 99.8288% ( 1) 00:08:59.418 39.385 - 39.582: 99.8444% ( 1) 00:08:59.418 44.702 - 44.898: 99.8599% ( 1) 00:08:59.418 50.806 - 51.200: 99.8755% ( 1) 00:08:59.418 57.895 - 58.289: 99.8911% ( 1) 00:08:59.418 60.652 - 61.046: 99.9066% ( 1) 00:08:59.418 61.046 - 61.440: 99.9222% ( 1) 00:08:59.418 72.468 - 72.862: 99.9377% ( 1) 00:08:59.418 76.012 - 76.406: 99.9533% ( 1) 00:08:59.418 80.345 - 80.738: 99.9689% ( 1) 00:08:59.418 84.283 - 84.677: 99.9844% ( 1) 00:08:59.418 333.982 - 335.557: 100.0000% ( 1) 00:08:59.418 00:08:59.418 ************************************ 00:08:59.418 END TEST nvme_overhead 00:08:59.418 ************************************ 00:08:59.418 00:08:59.418 real 0m1.220s 00:08:59.418 user 0m1.069s 00:08:59.418 sys 0m0.103s 00:08:59.418 12:12:30 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.418 12:12:30 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:59.418 12:12:30 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:59.418 12:12:30 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:59.418 12:12:30 nvme -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.418 12:12:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.418 ************************************ 00:08:59.418 START TEST nvme_arbitration 00:08:59.418 ************************************ 00:08:59.418 12:12:30 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:02.717 Initializing NVMe Controllers 00:09:02.717 Attached to 0000:00:10.0 00:09:02.717 Attached to 0000:00:11.0 00:09:02.717 Attached to 0000:00:13.0 00:09:02.717 Attached to 0000:00:12.0 00:09:02.717 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:02.717 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:02.717 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:02.717 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:02.717 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:02.717 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:02.717 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:02.717 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:02.717 Initialization complete. Launching workers. 00:09:02.717 Starting thread on core 1 with urgent priority queue 00:09:02.717 Starting thread on core 2 with urgent priority queue 00:09:02.717 Starting thread on core 3 with urgent priority queue 00:09:02.717 Starting thread on core 0 with urgent priority queue 00:09:02.717 QEMU NVMe Ctrl (12340 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:09:02.717 QEMU NVMe Ctrl (12342 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:09:02.717 QEMU NVMe Ctrl (12341 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:09:02.717 QEMU NVMe Ctrl (12342 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:09:02.717 QEMU NVMe Ctrl (12343 ) core 2: 746.67 IO/s 133.93 secs/100000 ios 00:09:02.717 QEMU NVMe Ctrl (12342 ) core 3: 917.33 IO/s 109.01 secs/100000 ios 00:09:02.717 ======================================================== 00:09:02.717 00:09:02.717 00:09:02.717 real 0m3.333s 00:09:02.717 user 0m9.268s 00:09:02.717 sys 0m0.133s 00:09:02.717 12:12:33 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.717 12:12:33 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:02.717 ************************************ 00:09:02.717 END TEST nvme_arbitration 00:09:02.717 ************************************ 00:09:02.717 12:12:33 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:02.717 12:12:33 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:02.717 12:12:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.717 12:12:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:02.717 ************************************ 00:09:02.717 START TEST nvme_single_aen 00:09:02.717 ************************************ 00:09:02.718 12:12:33 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:02.976 Asynchronous Event Request test 00:09:02.976 Attached to 0000:00:10.0 00:09:02.976 Attached to 0000:00:11.0 00:09:02.976 Attached to 0000:00:13.0 00:09:02.976 Attached to 0000:00:12.0 00:09:02.976 Reset controller to setup AER completions for this process 00:09:02.976 Registering asynchronous event callbacks... 
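"Registering asynchronous event callbacks..." corresponds to installing one completion callback per controller; every aer_cb line that follows is that callback firing. A minimal sketch:

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Fires whenever an outstanding Asynchronous Event Request completes. */
static void
aer_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
        if (spdk_nvme_cpl_is_error(cpl)) {
                return; /* AER aborted, e.g. by a controller reset */
        }
        /* cdw0 carries the event type/info fields the log decodes above. */
        printf("aer_cb: cdw0 = 0x%08" PRIx32 "\n", cpl->cdw0);
}

static void
setup_aer(struct spdk_nvme_ctrlr *ctrlr)
{
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}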
00:09:02.976 Getting orig temperature thresholds of all controllers 00:09:02.976 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.976 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.976 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.976 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.976 Setting all controllers temperature threshold low to trigger AER 00:09:02.976 Waiting for all controllers temperature threshold to be set lower 00:09:02.976 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.976 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:02.976 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.976 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:02.976 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.976 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:02.976 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.976 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:02.976 Waiting for all controllers to trigger AER and reset threshold 00:09:02.976 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.976 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.976 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.976 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.976 Cleaning up... 00:09:02.976 00:09:02.976 real 0m0.230s 00:09:02.976 user 0m0.095s 00:09:02.976 sys 0m0.095s 00:09:02.976 ************************************ 00:09:02.976 END TEST nvme_single_aen 00:09:02.976 ************************************ 00:09:02.977 12:12:33 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.977 12:12:33 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:02.977 12:12:33 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:02.977 12:12:33 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.977 12:12:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.977 12:12:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:02.977 ************************************ 00:09:02.977 START TEST nvme_doorbell_aers 00:09:02.977 ************************************ 00:09:02.977 12:12:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:09:02.977 12:12:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:02.977 12:12:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:02.977 12:12:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:02.977 12:12:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:02.977 12:12:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:02.977 12:12:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:09:02.977 12:12:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:02.977 12:12:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:02.977 12:12:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
00:09:02.977 12:12:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:02.977 12:12:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:02.977 12:12:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:02.977 12:12:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:03.235 [2024-12-05 12:12:34.036183] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. Dropping the request. 00:09:13.255 Executing: test_write_invalid_db 00:09:13.255 Waiting for AER completion... 00:09:13.255 Failure: test_write_invalid_db 00:09:13.255 00:09:13.255 Executing: test_invalid_db_write_overflow_sq 00:09:13.255 Waiting for AER completion... 00:09:13.255 Failure: test_invalid_db_write_overflow_sq 00:09:13.255 00:09:13.255 Executing: test_invalid_db_write_overflow_cq 00:09:13.255 Waiting for AER completion... 00:09:13.255 Failure: test_invalid_db_write_overflow_cq 00:09:13.255 00:09:13.255 12:12:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:13.255 12:12:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:13.255 [2024-12-05 12:12:44.070742] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. Dropping the request. 00:09:23.296 Executing: test_write_invalid_db 00:09:23.296 Waiting for AER completion... 00:09:23.296 Failure: test_write_invalid_db 00:09:23.296 00:09:23.296 Executing: test_invalid_db_write_overflow_sq 00:09:23.296 Waiting for AER completion... 00:09:23.296 Failure: test_invalid_db_write_overflow_sq 00:09:23.296 00:09:23.296 Executing: test_invalid_db_write_overflow_cq 00:09:23.296 Waiting for AER completion... 00:09:23.296 Failure: test_invalid_db_write_overflow_cq 00:09:23.296 00:09:23.296 12:12:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:23.296 12:12:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:23.296 [2024-12-05 12:12:54.096995] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. Dropping the request. 00:09:33.350 Executing: test_write_invalid_db 00:09:33.350 Waiting for AER completion... 00:09:33.350 Failure: test_write_invalid_db 00:09:33.350 00:09:33.351 Executing: test_invalid_db_write_overflow_sq 00:09:33.351 Waiting for AER completion... 00:09:33.351 Failure: test_invalid_db_write_overflow_sq 00:09:33.351 00:09:33.351 Executing: test_invalid_db_write_overflow_cq 00:09:33.351 Waiting for AER completion... 
00:09:33.351 Failure: test_invalid_db_write_overflow_cq 00:09:33.351 00:09:33.351 12:13:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:33.351 12:13:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:33.351 [2024-12-05 12:13:04.139007] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. Dropping the request. 00:09:43.315 Executing: test_write_invalid_db 00:09:43.315 Waiting for AER completion... 00:09:43.315 Failure: test_write_invalid_db 00:09:43.315 00:09:43.315 Executing: test_invalid_db_write_overflow_sq 00:09:43.315 Waiting for AER completion... 00:09:43.315 Failure: test_invalid_db_write_overflow_sq 00:09:43.316 00:09:43.316 Executing: test_invalid_db_write_overflow_cq 00:09:43.316 Waiting for AER completion... 00:09:43.316 Failure: test_invalid_db_write_overflow_cq 00:09:43.316 00:09:43.316 00:09:43.316 real 0m40.189s 00:09:43.316 user 0m34.023s 00:09:43.316 sys 0m5.761s 00:09:43.316 12:13:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.316 12:13:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:43.316 ************************************ 00:09:43.316 END TEST nvme_doorbell_aers 00:09:43.316 ************************************ 00:09:43.316 12:13:13 nvme -- nvme/nvme.sh@97 -- # uname 00:09:43.316 12:13:13 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:43.316 12:13:13 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:43.316 12:13:13 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:43.316 12:13:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.316 12:13:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:43.316 ************************************ 00:09:43.316 START TEST nvme_multi_aen 00:09:43.316 ************************************ 00:09:43.316 12:13:13 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:43.316 [2024-12-05 12:13:14.166919] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. Dropping the request. 00:09:43.316 [2024-12-05 12:13:14.167002] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. Dropping the request. 00:09:43.316 [2024-12-05 12:13:14.167016] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. Dropping the request. 00:09:43.316 [2024-12-05 12:13:14.168816] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. Dropping the request. 00:09:43.316 [2024-12-05 12:13:14.168857] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. Dropping the request. 00:09:43.316 [2024-12-05 12:13:14.168870] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. Dropping the request. 00:09:43.316 [2024-12-05 12:13:14.169803] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. 
Dropping the request. 00:09:43.316 [2024-12-05 12:13:14.169837] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. Dropping the request. 00:09:43.316 [2024-12-05 12:13:14.169850] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. Dropping the request. 00:09:43.316 [2024-12-05 12:13:14.170760] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. Dropping the request. 00:09:43.316 [2024-12-05 12:13:14.170792] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. Dropping the request. 00:09:43.316 [2024-12-05 12:13:14.170805] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63461) is not found. Dropping the request. 00:09:43.316 Child process pid: 63987 00:09:43.574 [Child] Asynchronous Event Request test 00:09:43.574 [Child] Attached to 0000:00:10.0 00:09:43.574 [Child] Attached to 0000:00:11.0 00:09:43.574 [Child] Attached to 0000:00:13.0 00:09:43.574 [Child] Attached to 0000:00:12.0 00:09:43.574 [Child] Registering asynchronous event callbacks... 00:09:43.574 [Child] Getting orig temperature thresholds of all controllers 00:09:43.574 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.574 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.574 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.574 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.574 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:43.574 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.574 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.574 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.574 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.574 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.574 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.574 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.574 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.574 [Child] Cleaning up... 00:09:43.574 Asynchronous Event Request test 00:09:43.574 Attached to 0000:00:10.0 00:09:43.574 Attached to 0000:00:11.0 00:09:43.574 Attached to 0000:00:13.0 00:09:43.574 Attached to 0000:00:12.0 00:09:43.574 Reset controller to setup AER completions for this process 00:09:43.574 Registering asynchronous event callbacks... 
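The "Setting all controllers temperature threshold low to trigger AER" step just below is a Set Features command for the composite temperature threshold (Feature 04h): pushing it below the current reading of 323 Kelvin guarantees the controller raises a temperature AER. Roughly, as a sketch (the threshold value passed in is an example):

#include "spdk/nvme.h"

/* Lower the composite temperature threshold; the Kelvin value lives in
 * CDW11 bits 15:0. Completion is reported through cb_fn. */
static int
set_temp_threshold(struct spdk_nvme_ctrlr *ctrlr, uint32_t kelvin,
                   spdk_nvme_cmd_cb cb_fn, void *cb_arg)
{
        return spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                        SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                        kelvin & 0xFFFF, 0, NULL, 0, cb_fn, cb_arg);
}

/* e.g. set_temp_threshold(ctrlr, 200, cb, NULL) drops well below 323 K */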
00:09:43.574 Getting orig temperature thresholds of all controllers 00:09:43.574 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.574 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.574 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.574 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.574 Setting all controllers temperature threshold low to trigger AER 00:09:43.574 Waiting for all controllers temperature threshold to be set lower 00:09:43.574 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.574 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:43.574 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.574 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:43.574 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.574 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:43.574 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.574 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:43.574 Waiting for all controllers to trigger AER and reset threshold 00:09:43.574 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.574 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.574 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.574 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.574 Cleaning up... 00:09:43.832 00:09:43.832 real 0m0.459s 00:09:43.832 user 0m0.143s 00:09:43.832 sys 0m0.204s 00:09:43.832 12:13:14 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.832 ************************************ 00:09:43.832 END TEST nvme_multi_aen 00:09:43.832 ************************************ 00:09:43.832 12:13:14 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:43.832 12:13:14 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:43.832 12:13:14 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:43.832 12:13:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.832 12:13:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:43.832 ************************************ 00:09:43.832 START TEST nvme_startup 00:09:43.832 ************************************ 00:09:43.832 12:13:14 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:43.832 Initializing NVMe Controllers 00:09:43.832 Attached to 0000:00:10.0 00:09:43.832 Attached to 0000:00:11.0 00:09:43.832 Attached to 0000:00:13.0 00:09:43.832 Attached to 0000:00:12.0 00:09:43.832 Initialization complete. 00:09:43.832 Time used:151671.234 (us). 
00:09:43.832 00:09:43.832 real 0m0.215s 00:09:43.832 user 0m0.072s 00:09:43.832 sys 0m0.099s 00:09:43.832 12:13:14 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.832 12:13:14 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:43.832 ************************************ 00:09:43.832 END TEST nvme_startup 00:09:43.832 ************************************ 00:09:44.090 12:13:14 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:44.090 12:13:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.090 12:13:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.090 12:13:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:44.090 ************************************ 00:09:44.090 START TEST nvme_multi_secondary 00:09:44.090 ************************************ 00:09:44.090 12:13:14 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:44.090 12:13:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64043 00:09:44.090 12:13:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64044 00:09:44.090 12:13:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:44.090 12:13:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:44.090 12:13:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:47.367 Initializing NVMe Controllers 00:09:47.367 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:47.367 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:47.367 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:47.367 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:47.367 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:47.367 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:47.367 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:47.367 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:47.367 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:47.367 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:47.367 Initialization complete. Launching workers. 
00:09:47.367 ======================================================== 00:09:47.367 Latency(us) 00:09:47.367 Device Information : IOPS MiB/s Average min max 00:09:47.368 PCIE (0000:00:10.0) NSID 1 from core 1: 7129.62 27.85 2242.81 1157.08 5943.35 00:09:47.368 PCIE (0000:00:11.0) NSID 1 from core 1: 7129.62 27.85 2243.76 1236.77 5709.23 00:09:47.368 PCIE (0000:00:13.0) NSID 1 from core 1: 7129.62 27.85 2243.72 1120.70 6097.65 00:09:47.368 PCIE (0000:00:12.0) NSID 1 from core 1: 7129.62 27.85 2243.68 1104.01 5655.14 00:09:47.368 PCIE (0000:00:12.0) NSID 2 from core 1: 7129.62 27.85 2243.74 1095.88 5911.81 00:09:47.368 PCIE (0000:00:12.0) NSID 3 from core 1: 7129.62 27.85 2243.69 1010.49 6010.48 00:09:47.368 ======================================================== 00:09:47.368 Total : 42777.70 167.10 2243.57 1010.49 6097.65 00:09:47.368 00:09:47.368 Initializing NVMe Controllers 00:09:47.368 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:47.368 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:47.368 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:47.368 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:47.368 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:47.368 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:47.368 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:47.368 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:47.368 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:47.368 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:47.368 Initialization complete. Launching workers. 00:09:47.368 ======================================================== 00:09:47.368 Latency(us) 00:09:47.368 Device Information : IOPS MiB/s Average min max 00:09:47.368 PCIE (0000:00:10.0) NSID 1 from core 2: 2659.47 10.39 6014.93 928.78 20904.64 00:09:47.368 PCIE (0000:00:11.0) NSID 1 from core 2: 2659.47 10.39 6015.41 1028.92 19578.78 00:09:47.368 PCIE (0000:00:13.0) NSID 1 from core 2: 2659.47 10.39 6015.79 1016.29 17442.57 00:09:47.368 PCIE (0000:00:12.0) NSID 1 from core 2: 2659.47 10.39 6015.51 1040.15 21538.79 00:09:47.368 PCIE (0000:00:12.0) NSID 2 from core 2: 2659.47 10.39 6015.90 1112.89 20728.36 00:09:47.368 PCIE (0000:00:12.0) NSID 3 from core 2: 2659.47 10.39 6015.84 943.99 17107.16 00:09:47.368 ======================================================== 00:09:47.368 Total : 15956.83 62.33 6015.56 928.78 21538.79 00:09:47.368 00:09:47.368 12:13:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64043 00:09:49.266 Initializing NVMe Controllers 00:09:49.266 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:49.266 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:49.266 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:49.266 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:49.266 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:49.266 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:49.266 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:49.266 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:49.266 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:49.266 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:49.266 Initialization complete. Launching workers. 
00:09:49.266 ======================================================== 00:09:49.266 Latency(us) 00:09:49.266 Device Information : IOPS MiB/s Average min max 00:09:49.266 PCIE (0000:00:10.0) NSID 1 from core 0: 10297.87 40.23 1552.52 677.49 5898.21 00:09:49.266 PCIE (0000:00:11.0) NSID 1 from core 0: 10301.07 40.24 1552.92 696.36 6192.28 00:09:49.266 PCIE (0000:00:13.0) NSID 1 from core 0: 10297.87 40.23 1553.42 683.31 6855.60 00:09:49.266 PCIE (0000:00:12.0) NSID 1 from core 0: 10297.87 40.23 1553.44 697.83 6816.24 00:09:49.266 PCIE (0000:00:12.0) NSID 2 from core 0: 10297.87 40.23 1553.46 703.68 6349.96 00:09:49.266 PCIE (0000:00:12.0) NSID 3 from core 0: 10297.87 40.23 1553.48 698.18 6267.56 00:09:49.266 ======================================================== 00:09:49.266 Total : 61790.44 241.37 1553.21 677.49 6855.60 00:09:49.266 00:09:49.266 12:13:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64044 00:09:49.266 12:13:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64113 00:09:49.266 12:13:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:49.266 12:13:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64114 00:09:49.266 12:13:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:49.266 12:13:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:52.583 Initializing NVMe Controllers 00:09:52.583 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:52.583 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:52.583 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:52.583 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:52.583 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:52.583 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:52.583 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:52.583 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:52.583 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:52.583 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:52.583 Initialization complete. Launching workers. 
00:09:52.583 ======================================================== 00:09:52.583 Latency(us) 00:09:52.583 Device Information : IOPS MiB/s Average min max 00:09:52.583 PCIE (0000:00:10.0) NSID 1 from core 1: 5696.45 22.25 2807.26 929.61 11711.03 00:09:52.583 PCIE (0000:00:11.0) NSID 1 from core 1: 5696.45 22.25 2808.47 925.08 12459.08 00:09:52.583 PCIE (0000:00:13.0) NSID 1 from core 1: 5696.45 22.25 2808.46 931.77 12229.41 00:09:52.583 PCIE (0000:00:12.0) NSID 1 from core 1: 5696.45 22.25 2808.48 933.14 12898.84 00:09:52.583 PCIE (0000:00:12.0) NSID 2 from core 1: 5696.45 22.25 2808.51 934.01 11933.90 00:09:52.583 PCIE (0000:00:12.0) NSID 3 from core 1: 5696.45 22.25 2808.46 918.27 11618.64 00:09:52.583 ======================================================== 00:09:52.583 Total : 34178.69 133.51 2808.27 918.27 12898.84 00:09:52.583 00:09:52.583 Initializing NVMe Controllers 00:09:52.583 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:52.583 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:52.583 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:52.583 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:52.583 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:52.583 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:52.583 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:52.583 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:52.583 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:52.583 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:52.583 Initialization complete. Launching workers. 00:09:52.583 ======================================================== 00:09:52.583 Latency(us) 00:09:52.583 Device Information : IOPS MiB/s Average min max 00:09:52.583 PCIE (0000:00:10.0) NSID 1 from core 0: 5652.91 22.08 2828.86 772.99 12004.87 00:09:52.583 PCIE (0000:00:11.0) NSID 1 from core 0: 5652.91 22.08 2829.84 786.01 11521.12 00:09:52.583 PCIE (0000:00:13.0) NSID 1 from core 0: 5652.91 22.08 2829.77 689.58 12501.76 00:09:52.583 PCIE (0000:00:12.0) NSID 1 from core 0: 5652.91 22.08 2829.70 670.58 12873.66 00:09:52.583 PCIE (0000:00:12.0) NSID 2 from core 0: 5652.91 22.08 2829.62 640.17 11414.18 00:09:52.583 PCIE (0000:00:12.0) NSID 3 from core 0: 5652.91 22.08 2829.55 612.76 11595.40 00:09:52.583 ======================================================== 00:09:52.583 Total : 33917.47 132.49 2829.56 612.76 12873.66 00:09:52.583 00:09:55.110 Initializing NVMe Controllers 00:09:55.110 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:55.110 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:55.110 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:55.110 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:55.110 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:55.110 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:55.110 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:55.110 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:55.110 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:55.110 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:55.110 Initialization complete. Launching workers. 
00:09:55.110 ======================================================== 00:09:55.110 Latency(us) 00:09:55.110 Device Information : IOPS MiB/s Average min max 00:09:55.110 PCIE (0000:00:10.0) NSID 1 from core 2: 2901.67 11.33 5511.87 814.67 32328.15 00:09:55.110 PCIE (0000:00:11.0) NSID 1 from core 2: 2901.67 11.33 5513.62 837.19 30167.78 00:09:55.110 PCIE (0000:00:13.0) NSID 1 from core 2: 2901.67 11.33 5513.53 832.58 34906.95 00:09:55.110 PCIE (0000:00:12.0) NSID 1 from core 2: 2901.67 11.33 5512.89 842.00 29899.30 00:09:55.110 PCIE (0000:00:12.0) NSID 2 from core 2: 2901.67 11.33 5513.35 821.05 34371.49 00:09:55.110 PCIE (0000:00:12.0) NSID 3 from core 2: 2901.67 11.33 5513.24 826.47 26802.22 00:09:55.110 ======================================================== 00:09:55.110 Total : 17410.04 68.01 5513.08 814.67 34906.95 00:09:55.110 00:09:55.110 12:13:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64113 00:09:55.110 12:13:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64114 00:09:55.110 00:09:55.110 real 0m10.696s 00:09:55.110 user 0m18.390s 00:09:55.110 sys 0m0.662s 00:09:55.110 12:13:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.110 12:13:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:55.110 ************************************ 00:09:55.110 END TEST nvme_multi_secondary 00:09:55.110 ************************************ 00:09:55.110 12:13:25 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:55.110 12:13:25 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:55.110 12:13:25 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/63052 ]] 00:09:55.110 12:13:25 nvme -- common/autotest_common.sh@1094 -- # kill 63052 00:09:55.110 12:13:25 nvme -- common/autotest_common.sh@1095 -- # wait 63052 00:09:55.110 [2024-12-05 12:13:25.457302] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 00:09:55.110 [2024-12-05 12:13:25.457390] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 00:09:55.110 [2024-12-05 12:13:25.457422] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 00:09:55.110 [2024-12-05 12:13:25.457440] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 00:09:55.110 [2024-12-05 12:13:25.460078] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 00:09:55.110 [2024-12-05 12:13:25.460139] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 00:09:55.110 [2024-12-05 12:13:25.460157] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 00:09:55.110 [2024-12-05 12:13:25.460176] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 00:09:55.110 [2024-12-05 12:13:25.462754] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 
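For reference, the nvme_multi_secondary workload that just completed (pids 64043/64044/64113/64114 above) boils down to one DPDK primary and two secondary spdk_nvme_perf processes sharing hugepage state through a common shared-memory group id (-i 0) while pinned to disjoint core masks (-c). A standalone sketch of the same pattern, with the exact flags recorded in this log; the sleep is an assumption to let the primary finish initializing before the secondaries attach:

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary, core 0, runs longest
  sleep 1                                            # assumed settle time for primary init
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary, core 1
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary, core 2
  wait                                               # collect all three, as the test does by waiting on each pid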
00:09:55.110 [2024-12-05 12:13:25.462807] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 00:09:55.110 [2024-12-05 12:13:25.462825] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 00:09:55.110 [2024-12-05 12:13:25.462843] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 00:09:55.110 [2024-12-05 12:13:25.465127] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 00:09:55.110 [2024-12-05 12:13:25.465165] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 00:09:55.110 [2024-12-05 12:13:25.465177] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 00:09:55.110 [2024-12-05 12:13:25.465189] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63986) is not found. Dropping the request. 00:09:55.110 12:13:25 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:55.110 12:13:25 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:55.110 12:13:25 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:55.110 12:13:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.110 12:13:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.110 12:13:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:55.110 ************************************ 00:09:55.110 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:55.110 ************************************ 00:09:55.110 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:55.110 * Looking for test storage... 
00:09:55.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:55.110 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:55.110 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:55.110 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:55.110 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:55.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.111 --rc genhtml_branch_coverage=1 00:09:55.111 --rc genhtml_function_coverage=1 00:09:55.111 --rc genhtml_legend=1 00:09:55.111 --rc geninfo_all_blocks=1 00:09:55.111 --rc geninfo_unexecuted_blocks=1 00:09:55.111 00:09:55.111 ' 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:55.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.111 --rc genhtml_branch_coverage=1 00:09:55.111 --rc genhtml_function_coverage=1 00:09:55.111 --rc genhtml_legend=1 00:09:55.111 --rc geninfo_all_blocks=1 00:09:55.111 --rc geninfo_unexecuted_blocks=1 00:09:55.111 00:09:55.111 ' 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:55.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.111 --rc genhtml_branch_coverage=1 00:09:55.111 --rc genhtml_function_coverage=1 00:09:55.111 --rc genhtml_legend=1 00:09:55.111 --rc geninfo_all_blocks=1 00:09:55.111 --rc geninfo_unexecuted_blocks=1 00:09:55.111 00:09:55.111 ' 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:55.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.111 --rc genhtml_branch_coverage=1 00:09:55.111 --rc genhtml_function_coverage=1 00:09:55.111 --rc genhtml_legend=1 00:09:55.111 --rc geninfo_all_blocks=1 00:09:55.111 --rc geninfo_unexecuted_blocks=1 00:09:55.111 00:09:55.111 ' 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:55.111 
12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:55.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64272 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64272 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64272 ']' 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
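The waitforlisten call being traced here reduces to the pattern below: launch spdk_tgt on core mask 0xF, then poll the RPC Unix socket until a built-in method answers. A simplified sketch of the helper, not its exact code:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
  spdk_target_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done   # socket is up; RPCs such as bdev_nvme_attach_controller (issued next) can proceed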
00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.111 12:13:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:55.111 [2024-12-05 12:13:25.887110] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:09:55.111 [2024-12-05 12:13:25.887223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64272 ] 00:09:55.368 [2024-12-05 12:13:26.054144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.368 [2024-12-05 12:13:26.175537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.368 [2024-12-05 12:13:26.175765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.368 [2024-12-05 12:13:26.176182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.368 [2024-12-05 12:13:26.176210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:56.302 nvme0n1 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_GaEQf.txt 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:56.302 true 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733400806 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64295 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:56.302 12:13:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:58.204 12:13:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:58.204 12:13:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.204 12:13:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:58.204 [2024-12-05 12:13:28.932270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:58.204 [2024-12-05 12:13:28.932564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:58.204 [2024-12-05 12:13:28.932591] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:58.204 [2024-12-05 12:13:28.932605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:58.204 [2024-12-05 12:13:28.934980] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:58.204 12:13:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.204 12:13:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64295 00:09:58.204 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64295 00:09:58.204 12:13:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64295 00:09:58.204 12:13:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:58.204 12:13:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:58.204 12:13:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:58.204 12:13:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.204 12:13:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:58.204 12:13:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.204 12:13:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:58.204 12:13:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_GaEQf.txt 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_GaEQf.txt 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64272 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64272 ']' 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64272 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64272 00:09:58.204 killing process with pid 64272 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64272' 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64272 00:09:58.204 12:13:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64272 00:10:00.169 12:13:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:00.169 12:13:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:00.170 00:10:00.170 real 0m4.997s 00:10:00.170 user 0m17.625s 00:10:00.170 sys 0m0.576s 00:10:00.170 12:13:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:00.170 12:13:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:00.170 ************************************ 00:10:00.170 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:00.170 ************************************ 00:10:00.170 12:13:30 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:00.170 12:13:30 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:00.170 12:13:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:00.170 12:13:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.170 12:13:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:00.170 ************************************ 00:10:00.170 START TEST nvme_fio 00:10:00.170 ************************************ 00:10:00.170 12:13:30 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:10:00.170 12:13:30 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:00.170 12:13:30 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:00.170 12:13:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:00.170 12:13:30 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:00.170 12:13:30 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:10:00.170 12:13:30 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:00.170 12:13:30 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:00.170 12:13:30 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:00.170 12:13:30 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:00.170 12:13:30 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:00.170 12:13:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:00.170 12:13:30 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:00.170 12:13:30 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:00.170 12:13:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:00.170 12:13:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:00.170 12:13:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:00.170 12:13:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:00.428 12:13:31 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:00.428 12:13:31 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:00.428 12:13:31 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:00.428 12:13:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:00.428 12:13:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:00.428 12:13:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:00.428 12:13:31 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:00.428 12:13:31 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:00.428 12:13:31 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:00.428 12:13:31 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:00.428 12:13:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:00.428 12:13:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:00.428 12:13:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:00.428 12:13:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:00.428 12:13:31 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:00.428 12:13:31 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:00.428 12:13:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:00.428 12:13:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:00.686 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:00.686 fio-3.35 00:10:00.686 Starting 1 thread 00:10:05.945 00:10:05.945 test: (groupid=0, jobs=1): err= 0: pid=64439: Thu Dec 5 12:13:36 2024 00:10:05.945 read: IOPS=19.9k, BW=77.6MiB/s (81.3MB/s)(155MiB/2001msec) 00:10:05.945 slat (nsec): min=3975, max=65327, avg=6136.61, stdev=2604.50 00:10:05.945 clat (usec): min=240, max=14080, avg=3207.07, stdev=951.12 00:10:05.945 lat (usec): min=245, max=14144, avg=3213.20, stdev=952.60 00:10:05.945 clat percentiles (usec): 00:10:05.945 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:10:05.945 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 2999], 00:10:05.945 | 70.00th=[ 3163], 80.00th=[ 3556], 90.00th=[ 4490], 95.00th=[ 5342], 00:10:05.945 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 8160], 99.95th=[11076], 00:10:05.945 | 99.99th=[13698] 00:10:05.945 bw ( KiB/s): min=76560, max=81568, per=98.78%, avg=78469.33, stdev=2707.56, samples=3 00:10:05.945 iops : min=19140, max=20392, avg=19617.33, stdev=676.89, samples=3 00:10:05.945 write: IOPS=19.8k, BW=77.4MiB/s (81.1MB/s)(155MiB/2001msec); 0 zone resets 00:10:05.945 slat (nsec): min=4122, max=83258, avg=6430.13, stdev=2640.65 00:10:05.945 clat (usec): min=200, max=13830, avg=3218.25, stdev=955.48 00:10:05.945 lat (usec): min=205, max=13845, avg=3224.68, stdev=956.96 00:10:05.945 clat percentiles (usec): 00:10:05.945 | 1.00th=[ 2180], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2638], 00:10:05.945 | 30.00th=[ 2737], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 2999], 00:10:05.945 | 70.00th=[ 3195], 80.00th=[ 3556], 90.00th=[ 4490], 95.00th=[ 5342], 00:10:05.945 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 8586], 99.95th=[11338], 00:10:05.945 | 99.99th=[13435] 00:10:05.945 bw ( KiB/s): min=76384, max=81744, per=99.14%, avg=78562.67, stdev=2817.16, samples=3 00:10:05.945 iops : min=19096, max=20436, avg=19640.67, stdev=704.29, samples=3 00:10:05.945 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:10:05.945 lat (msec) : 2=0.40%, 4=85.03%, 10=14.44%, 20=0.07% 00:10:05.945 cpu : usr=99.30%, sys=0.00%, 
ctx=3, majf=0, minf=607 00:10:05.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:05.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.945 issued rwts: total=39740,39640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.945 00:10:05.945 Run status group 0 (all jobs): 00:10:05.945 READ: bw=77.6MiB/s (81.3MB/s), 77.6MiB/s-77.6MiB/s (81.3MB/s-81.3MB/s), io=155MiB (163MB), run=2001-2001msec 00:10:05.945 WRITE: bw=77.4MiB/s (81.1MB/s), 77.4MiB/s-77.4MiB/s (81.1MB/s-81.1MB/s), io=155MiB (162MB), run=2001-2001msec 00:10:06.207 ----------------------------------------------------- 00:10:06.207 Suppressions used: 00:10:06.207 count bytes template 00:10:06.207 1 32 /usr/src/fio/parse.c 00:10:06.207 1 8 libtcmalloc_minimal.so 00:10:06.207 ----------------------------------------------------- 00:10:06.207 00:10:06.207 12:13:36 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:06.207 12:13:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:06.207 12:13:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:06.207 12:13:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:06.466 12:13:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:06.466 12:13:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:06.723 12:13:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:06.723 12:13:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:06.723 12:13:37 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:06.723 12:13:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:06.723 12:13:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:06.723 12:13:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:06.723 12:13:37 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:06.723 12:13:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:06.723 12:13:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:06.723 12:13:37 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:06.723 12:13:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:06.723 12:13:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:06.723 12:13:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:06.723 12:13:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:06.723 12:13:37 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:06.723 12:13:37 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:06.723 12:13:37 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:06.723 12:13:37 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:06.723 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:06.723 fio-3.35 00:10:06.723 Starting 1 thread 00:10:11.985 00:10:11.985 test: (groupid=0, jobs=1): err= 0: pid=64500: Thu Dec 5 12:13:42 2024 00:10:11.985 read: IOPS=19.8k, BW=77.4MiB/s (81.2MB/s)(155MiB/2001msec) 00:10:11.985 slat (nsec): min=3948, max=49532, avg=6082.28, stdev=2575.09 00:10:11.985 clat (usec): min=397, max=8026, avg=3207.00, stdev=957.95 00:10:11.985 lat (usec): min=403, max=8032, avg=3213.09, stdev=959.52 00:10:11.985 clat percentiles (usec): 00:10:11.985 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2606], 00:10:11.985 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 2999], 00:10:11.985 | 70.00th=[ 3163], 80.00th=[ 3490], 90.00th=[ 4359], 95.00th=[ 5669], 00:10:11.985 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 7308], 99.95th=[ 7439], 00:10:11.985 | 99.99th=[ 7832] 00:10:11.985 bw ( KiB/s): min=75472, max=82048, per=100.00%, avg=79360.00, stdev=3448.32, samples=3 00:10:11.985 iops : min=18868, max=20512, avg=19840.00, stdev=862.08, samples=3 00:10:11.985 write: IOPS=19.8k, BW=77.2MiB/s (81.0MB/s)(155MiB/2001msec); 0 zone resets 00:10:11.985 slat (nsec): min=4120, max=78605, avg=6385.84, stdev=2692.67 00:10:11.985 clat (usec): min=439, max=7911, avg=3231.85, stdev=969.77 00:10:11.985 lat (usec): min=445, max=7918, avg=3238.24, stdev=971.36 00:10:11.985 clat percentiles (usec): 00:10:11.985 | 1.00th=[ 2212], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:10:11.985 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 3032], 00:10:11.985 | 70.00th=[ 3195], 80.00th=[ 3523], 90.00th=[ 4359], 95.00th=[ 5800], 00:10:11.985 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 7308], 99.95th=[ 7373], 00:10:11.985 | 99.99th=[ 7635] 00:10:11.985 bw ( KiB/s): min=75344, max=81968, per=100.00%, avg=79397.33, stdev=3552.19, samples=3 00:10:11.985 iops : min=18836, max=20492, avg=19849.33, stdev=888.05, samples=3 00:10:11.985 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:10:11.985 lat (msec) : 2=0.32%, 4=87.02%, 10=12.62% 00:10:11.985 cpu : usr=99.30%, sys=0.05%, ctx=2, majf=0, minf=606 00:10:11.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:11.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.985 issued rwts: total=39665,39565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.985 00:10:11.985 Run status group 0 (all jobs): 00:10:11.985 READ: bw=77.4MiB/s (81.2MB/s), 77.4MiB/s-77.4MiB/s (81.2MB/s-81.2MB/s), io=155MiB (162MB), run=2001-2001msec 00:10:11.985 WRITE: bw=77.2MiB/s (81.0MB/s), 77.2MiB/s-77.2MiB/s (81.0MB/s-81.0MB/s), io=155MiB (162MB), run=2001-2001msec 00:10:12.242 ----------------------------------------------------- 00:10:12.243 Suppressions used: 00:10:12.243 count bytes template 00:10:12.243 1 32 /usr/src/fio/parse.c 00:10:12.243 1 8 libtcmalloc_minimal.so 00:10:12.243 ----------------------------------------------------- 00:10:12.243 00:10:12.243 12:13:43 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:12.243 12:13:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:12.243 12:13:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:12.243 12:13:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:12.499 12:13:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:12.499 12:13:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:12.756 12:13:43 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:12.756 12:13:43 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:12.756 12:13:43 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:13.013 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:13.014 fio-3.35 00:10:13.014 Starting 1 thread 00:10:19.608 00:10:19.608 test: (groupid=0, jobs=1): err= 0: pid=64561: Thu Dec 5 12:13:49 2024 00:10:19.608 read: IOPS=20.3k, BW=79.2MiB/s (83.1MB/s)(159MiB/2001msec) 00:10:19.608 slat (usec): min=4, max=117, avg= 6.00, stdev= 2.39 00:10:19.608 clat (usec): min=638, max=7881, avg=3143.20, stdev=903.31 00:10:19.608 lat (usec): min=651, max=7886, avg=3149.21, stdev=904.61 00:10:19.608 clat percentiles (usec): 00:10:19.608 | 1.00th=[ 2180], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2573], 00:10:19.608 | 30.00th=[ 2671], 40.00th=[ 2769], 
50.00th=[ 2868], 60.00th=[ 2999], 00:10:19.608 | 70.00th=[ 3163], 80.00th=[ 3458], 90.00th=[ 4228], 95.00th=[ 5342], 00:10:19.608 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7439], 00:10:19.608 | 99.99th=[ 7701] 00:10:19.608 bw ( KiB/s): min=77808, max=83640, per=100.00%, avg=81592.00, stdev=3280.75, samples=3 00:10:19.608 iops : min=19452, max=20910, avg=20398.00, stdev=820.19, samples=3 00:10:19.608 write: IOPS=20.2k, BW=79.0MiB/s (82.9MB/s)(158MiB/2001msec); 0 zone resets 00:10:19.608 slat (usec): min=4, max=614, avg= 6.31, stdev= 3.92 00:10:19.608 clat (usec): min=793, max=8035, avg=3152.48, stdev=904.11 00:10:19.608 lat (usec): min=806, max=8041, avg=3158.79, stdev=905.47 00:10:19.609 clat percentiles (usec): 00:10:19.609 | 1.00th=[ 2180], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2573], 00:10:19.609 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2999], 00:10:19.609 | 70.00th=[ 3163], 80.00th=[ 3458], 90.00th=[ 4228], 95.00th=[ 5342], 00:10:19.609 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 7308], 99.95th=[ 7373], 00:10:19.609 | 99.99th=[ 7767] 00:10:19.609 bw ( KiB/s): min=77896, max=83648, per=100.00%, avg=81720.00, stdev=3311.72, samples=3 00:10:19.609 iops : min=19474, max=20912, avg=20430.00, stdev=827.93, samples=3 00:10:19.609 lat (usec) : 750=0.01%, 1000=0.01% 00:10:19.609 lat (msec) : 2=0.28%, 4=87.86%, 10=11.85% 00:10:19.609 cpu : usr=98.90%, sys=0.15%, ctx=6, majf=0, minf=606 00:10:19.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:19.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:19.609 issued rwts: total=40579,40481,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:19.609 00:10:19.609 Run status group 0 (all jobs): 00:10:19.609 READ: bw=79.2MiB/s (83.1MB/s), 79.2MiB/s-79.2MiB/s (83.1MB/s-83.1MB/s), io=159MiB (166MB), run=2001-2001msec 00:10:19.609 WRITE: bw=79.0MiB/s (82.9MB/s), 79.0MiB/s-79.0MiB/s (82.9MB/s-82.9MB/s), io=158MiB (166MB), run=2001-2001msec 00:10:19.609 ----------------------------------------------------- 00:10:19.609 Suppressions used: 00:10:19.609 count bytes template 00:10:19.609 1 32 /usr/src/fio/parse.c 00:10:19.609 1 8 libtcmalloc_minimal.so 00:10:19.609 ----------------------------------------------------- 00:10:19.609 00:10:19.609 12:13:49 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:19.609 12:13:49 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:19.609 12:13:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:19.609 12:13:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:19.609 12:13:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:19.609 12:13:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:19.609 12:13:50 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:19.609 12:13:50 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:19.609 12:13:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:19.868 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:19.868 fio-3.35 00:10:19.868 Starting 1 thread 00:10:29.833 00:10:29.833 test: (groupid=0, jobs=1): err= 0: pid=64626: Thu Dec 5 12:13:59 2024 00:10:29.833 read: IOPS=20.3k, BW=79.4MiB/s (83.3MB/s)(159MiB/2001msec) 00:10:29.833 slat (nsec): min=3998, max=82867, avg=5936.23, stdev=2226.04 00:10:29.833 clat (usec): min=232, max=8890, avg=3131.68, stdev=854.97 00:10:29.833 lat (usec): min=237, max=8940, avg=3137.62, stdev=856.19 00:10:29.833 clat percentiles (usec): 00:10:29.833 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2573], 00:10:29.833 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2999], 00:10:29.833 | 70.00th=[ 3163], 80.00th=[ 3425], 90.00th=[ 4178], 95.00th=[ 5145], 00:10:29.833 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 6980], 99.95th=[ 7504], 00:10:29.833 | 99.99th=[ 8586] 00:10:29.833 bw ( KiB/s): min=78192, max=81992, per=98.11%, avg=79805.33, stdev=1963.81, samples=3 00:10:29.833 iops : min=19548, max=20498, avg=19951.33, stdev=490.95, samples=3 00:10:29.833 write: IOPS=20.3k, BW=79.3MiB/s (83.1MB/s)(159MiB/2001msec); 0 zone resets 00:10:29.833 slat (nsec): min=4197, max=84917, avg=6235.49, stdev=2298.63 00:10:29.833 clat (usec): min=195, max=8623, avg=3143.04, stdev=846.97 00:10:29.833 lat (usec): min=200, max=8637, avg=3149.27, stdev=848.19 00:10:29.833 clat percentiles (usec): 00:10:29.833 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2606], 00:10:29.833 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 3032], 00:10:29.833 | 70.00th=[ 3163], 80.00th=[ 3458], 90.00th=[ 4178], 95.00th=[ 5080], 00:10:29.833 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 6915], 
00:10:19.868 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:10:19.868 fio-3.35
00:10:19.868 Starting 1 thread
00:10:29.833
00:10:29.833 test: (groupid=0, jobs=1): err= 0: pid=64626: Thu Dec 5 12:13:59 2024
00:10:29.833 read: IOPS=20.3k, BW=79.4MiB/s (83.3MB/s)(159MiB/2001msec)
00:10:29.833 slat (nsec): min=3998, max=82867, avg=5936.23, stdev=2226.04
00:10:29.833 clat (usec): min=232, max=8890, avg=3131.68, stdev=854.97
00:10:29.833 lat (usec): min=237, max=8940, avg=3137.62, stdev=856.19
00:10:29.833 clat percentiles (usec):
00:10:29.833 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2573],
00:10:29.833 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2999],
00:10:29.833 | 70.00th=[ 3163], 80.00th=[ 3425], 90.00th=[ 4178], 95.00th=[ 5145],
00:10:29.833 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 6980], 99.95th=[ 7504],
00:10:29.833 | 99.99th=[ 8586]
00:10:29.833 bw ( KiB/s): min=78192, max=81992, per=98.11%, avg=79805.33, stdev=1963.81, samples=3
00:10:29.833 iops : min=19548, max=20498, avg=19951.33, stdev=490.95, samples=3
00:10:29.833 write: IOPS=20.3k, BW=79.3MiB/s (83.1MB/s)(159MiB/2001msec); 0 zone resets
00:10:29.833 slat (nsec): min=4197, max=84917, avg=6235.49, stdev=2298.63
00:10:29.833 clat (usec): min=195, max=8623, avg=3143.04, stdev=846.97
00:10:29.833 lat (usec): min=200, max=8637, avg=3149.27, stdev=848.19
00:10:29.833 clat percentiles (usec):
00:10:29.833 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2606],
00:10:29.833 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 3032],
00:10:29.833 | 70.00th=[ 3163], 80.00th=[ 3458], 90.00th=[ 4178], 95.00th=[ 5080],
00:10:29.833 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 6915], 99.95th=[ 7504],
00:10:29.833 | 99.99th=[ 8455]
00:10:29.833 bw ( KiB/s): min=78080, max=81712, per=98.34%, avg=79808.00, stdev=1822.39, samples=3
00:10:29.833 iops : min=19520, max=20428, avg=19952.00, stdev=455.60, samples=3
00:10:29.833 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:10:29.833 lat (msec) : 2=0.40%, 4=88.24%, 10=11.32%
00:10:29.833 cpu : usr=99.15%, sys=0.15%, ctx=3, majf=0, minf=604
00:10:29.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:10:29.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:29.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:29.833 issued rwts: total=40693,40598,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:29.833 latency : target=0, window=0, percentile=100.00%, depth=128
00:10:29.833
00:10:29.833 Run status group 0 (all jobs):
00:10:29.833 READ: bw=79.4MiB/s (83.3MB/s), 79.4MiB/s-79.4MiB/s (83.3MB/s-83.3MB/s), io=159MiB (167MB), run=2001-2001msec
00:10:29.833 WRITE: bw=79.3MiB/s (83.1MB/s), 79.3MiB/s-79.3MiB/s (83.1MB/s-83.1MB/s), io=159MiB (166MB), run=2001-2001msec
00:10:29.833 -----------------------------------------------------
00:10:29.833 Suppressions used:
00:10:29.833 count bytes template
00:10:29.833 1 32 /usr/src/fio/parse.c
00:10:29.833 1 8 libtcmalloc_minimal.so
00:10:29.833 -----------------------------------------------------
00:10:29.833
00:10:29.833 12:13:59 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:10:29.833 ************************************
00:10:29.833 END TEST nvme_fio
00:10:29.833 ************************************
00:10:29.833 12:13:59 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true
00:10:29.833
00:10:29.833 real 0m28.768s
00:10:29.833 user 0m17.916s
00:10:29.833 sys 0m19.342s
00:10:29.833 12:13:59 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:29.833 12:13:59 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:10:29.833 ************************************
00:10:29.834 END TEST nvme
00:10:29.834 ************************************
00:10:29.834
00:10:29.834 real 1m39.692s
00:10:29.834 user 3m42.086s
00:10:29.834 sys 0m30.350s
00:10:29.834 12:13:59 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]]
00:10:29.834 12:13:59 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:10:29.834 12:13:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:29.834 12:13:59 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:29.834 12:13:59 -- common/autotest_common.sh@10 -- # set +x
00:10:29.834 ************************************
00:10:29.834 START TEST nvme_scc
00:10:29.834 ************************************
00:10:29.834 12:13:59 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:10:29.834 * Looking for test storage...
00:10:29.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:29.834 12:13:59 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:29.834 12:13:59 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:29.834 12:13:59 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:29.834 12:13:59 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:29.834 12:13:59 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.834 12:13:59 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:29.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.834 --rc genhtml_branch_coverage=1 00:10:29.834 --rc genhtml_function_coverage=1 00:10:29.834 --rc genhtml_legend=1 00:10:29.834 --rc geninfo_all_blocks=1 00:10:29.834 --rc geninfo_unexecuted_blocks=1 00:10:29.834 00:10:29.834 ' 00:10:29.834 12:13:59 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:29.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.834 --rc genhtml_branch_coverage=1 00:10:29.834 --rc genhtml_function_coverage=1 00:10:29.834 --rc genhtml_legend=1 00:10:29.834 --rc geninfo_all_blocks=1 00:10:29.834 --rc geninfo_unexecuted_blocks=1 00:10:29.834 00:10:29.834 ' 00:10:29.834 12:13:59 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:29.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.834 --rc genhtml_branch_coverage=1 00:10:29.834 --rc genhtml_function_coverage=1 00:10:29.834 --rc genhtml_legend=1 00:10:29.834 --rc geninfo_all_blocks=1 00:10:29.834 --rc geninfo_unexecuted_blocks=1 00:10:29.834 00:10:29.834 ' 00:10:29.834 12:13:59 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:29.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.834 --rc genhtml_branch_coverage=1 00:10:29.834 --rc genhtml_function_coverage=1 00:10:29.834 --rc genhtml_legend=1 00:10:29.834 --rc geninfo_all_blocks=1 00:10:29.834 --rc geninfo_unexecuted_blocks=1 00:10:29.834 00:10:29.834 ' 00:10:29.834 12:13:59 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:29.834 12:13:59 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:29.834 12:13:59 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:29.834 12:13:59 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:29.834 12:13:59 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.834 12:13:59 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.834 12:13:59 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.834 12:13:59 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.834 12:13:59 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.834 12:13:59 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:29.834 12:13:59 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
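Two lines of context for the scripts/common.sh trace further up: the lcov version gate (lt 1.15 2, which calls cmp_versions 1.15 '<' 2) splits both version strings on '.', '-' and ':' and compares them component by component. A simplified sketch, assuming purely numeric components (the real helper also normalizes values such as pre-release suffixes through its decimal() function, visible in the trace at scripts/common.sh@353):

    lt() { cmp_versions "$1" '<' "$2"; }    # "is $1 older than $2?"
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}    # missing components count as 0
            ((a == b)) && continue
            case $op in                        # first differing component decides
                '<') ((a < b)); return ;;
                '>') ((a > b)); return ;;
            esac
        done
        return 1                               # all components equal
    }

For '1.15' against '2' the very first components already differ (1 < 2), which is the return 0 visible in the trace; the LCOV_OPTS coverage flags exported above are only set because that check passed.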
00:10:29.834 12:13:59 nvme_scc -- nvme/functions.sh@10 -- # ctrls=()
00:10:29.834 12:13:59 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls
00:10:29.834 12:13:59 nvme_scc -- nvme/functions.sh@11 -- # nvmes=()
00:10:29.834 12:13:59 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes
00:10:29.834 12:13:59 nvme_scc -- nvme/functions.sh@12 -- # bdfs=()
00:10:29.834 12:13:59 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs
00:10:29.834 12:13:59 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:10:29.834 12:13:59 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:10:29.834 12:13:59 nvme_scc -- nvme/functions.sh@14 -- # nvme_name=
00:10:29.834 12:13:59 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:29.834 12:13:59 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname
00:10:29.834 12:13:59 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:10:29.834 12:13:59 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
00:10:29.834 12:13:59 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:10:29.834 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:29.834 Waiting for block devices as requested
00:10:29.834 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:10:29.834 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:10:29.834 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:10:29.834 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:10:35.107 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:10:35.107 12:14:05 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:10:35.107 12:14:05 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:10:35.107 12:14:05 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:35.107 12:14:05 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:10:35.107 12:14:05 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:10:35.107 12:14:05 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:10:35.107 12:14:05 nvme_scc -- scripts/common.sh@18 -- # local i
00:10:35.107 12:14:05 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:10:35.107 12:14:05 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:35.107 12:14:05 nvme_scc -- scripts/common.sh@27 -- # return 0
00:10:35.107 12:14:05 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:10:35.107 12:14:05 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:10:35.107 12:14:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:10:35.107 12:14:05 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:35.107 12:14:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
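The wall of IFS=: / read / eval lines that starts here (and repeats for every field, controller and namespace below) is nvme/functions.sh's nvme_get folding the nvme-cli report into one global associative array per device, so later checks can simply index it. A condensed sketch, paraphrased from test/common/nvme/functions.sh with the whitespace handling simplified:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declare -gA nvme0=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # "vid       " -> "vid"
            val=${val# }                     # drop the separator blank; value padding stays
            [[ -n $val ]] && eval "${ref}[$reg]=\"$val\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Once nvme_get nvme0 id-ctrl /dev/nvme0 finishes, a test can consume the fields directly, for example a (hypothetical) capability probe like (( nvme0[oncs] & (1 << 8) )) for the Copy command that nvme_scc exercises (oncs is decoded as 0x15d below, which has that bit set), or ${nvme0[mdts]} for the controller's transfer-size limit.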
00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.108 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:35.109 12:14:05 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:35.109 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:35.110 12:14:05 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:35.110 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.111 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:35.111 
12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:10:35.111-00:10:35.112 12:14:05 nvme_scc -- nvme/functions.sh@21-23: the same [[ -n $val ]] / eval 'ng0n1[reg]="$val"' / IFS=: / read -r reg val cycle stores the remaining ng0n1 id-ns fields: nawun=0, nawupf=0, nacwu=0, nabsn=0, nabo=0, nabspf=0, noiob=0, nvmcap=0, npwg=0, npwa=0, npdg=0, npda=0, nows=0, mssrl=128, mcl=128, msrc=127, nulbaf=0, anagrpid=0, nsattr=0, nvmsetid=0, endgid=0, nguid=00000000000000000000000000000000, eui64=0000000000000000, lbaf0='ms:0 lbads:9 rp:0', lbaf1='ms:8 lbads:9 rp:0', lbaf2='ms:16 lbads:9 rp:0', lbaf3='ms:64 lbads:9 rp:0', lbaf4='ms:0 lbads:12 rp:0 (in use)', lbaf5='ms:8 lbads:12 rp:0', lbaf6='ms:16 lbads:12 rp:0', lbaf7='ms:64 lbads:12 rp:0'.
00:10:35.112 12:14:05 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:10:35.112 12:14:05 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:35.112 12:14:05 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:10:35.112 12:14:05 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:10:35.112 12:14:05 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:10:35.112 12:14:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:10:35.112 12:14:05 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:35.112 12:14:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:10:35.112 12:14:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:10:35.112 12:14:05 nvme_scc -- nvme/functions.sh@21-23: the nvme0n1 id-ns fields begin with nsze=0x140000 and ncap=0x140000; the rest continue after the sketch below.
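The cycle traced above is the nvme_get() pattern: bind the device name to a global associative array, run nvme-cli, split each output line on the first ':' into a register name and value, and eval the assignment. A minimal runnable sketch of that pattern in bash, assuming nvme-cli's "reg : value" output format; the helper name parse_id_output and the empty-key guard are illustrative, not the verbatim nvme/functions.sh code:

  #!/usr/bin/env bash
  # Sketch (assumption, simplified from the traced flow): fill one global
  # associative array per device from "reg : value" lines.
  parse_id_output() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"               # e.g. declares a global array nvme0n1=()
    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue       # keep only lines that carry a value
      reg=${reg//[[:space:]]/}        # normalize the register name into a key
      [[ -n $reg ]] || continue       # skip lines with no usable key
      eval "${ref}[\$reg]=\$val"      # e.g. nvme0n1[nsze]=' 0x140000'
    done < <("$@")                    # the remaining args are the command to run
  }
  # Hypothetical usage mirroring the trace:
  # parse_id_output nvme0n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1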
00:10:35.112-00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@21-23: nvme0n1 id-ns fields (continued): nuse=0x140000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dps=0, nmic=0, rescap=0, fpi=0, dlfeat=1, nawun=0, nawupf=0, nacwu=0, nabsn=0, nabo=0, nabspf=0, noiob=0, nvmcap=0, npwg=0, npwa=0, npdg=0, npda=0, nows=0, mssrl=128, mcl=128, msrc=127, nulbaf=0, anagrpid=0, nsattr=0, nvmsetid=0, endgid=0, nguid=00000000000000000000000000000000, eui64=0000000000000000, lbaf0='ms:0 lbads:9 rp:0', lbaf1='ms:8 lbads:9 rp:0', lbaf2='ms:16 lbads:9 rp:0', lbaf3='ms:64 lbads:9 rp:0', lbaf4='ms:0 lbads:12 rp:0 (in use)', lbaf5='ms:8 lbads:12 rp:0', lbaf6='ms:16 lbads:12 rp:0', lbaf7='ms:64 lbads:12 rp:0'.
00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:10:35.114 12:14:05 nvme_scc -- scripts/common.sh@18 -- # local i
00:10:35.114 12:14:05 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:10:35.114 12:14:05 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:35.114 12:14:05 nvme_scc -- scripts/common.sh@27 -- # return 0
00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
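The assignments just traced (functions.sh@58-63) are the per-controller bookkeeping: _ctrl_ns records the namespace, while ctrls, nvmes, bdfs and ordered_ctrls map the controller name to itself, to the name of its namespace array, to its PCI address, and to a slot ordered by controller number. A short illustrative sketch of that step, assuming each /sys/class/nvme/nvmeX entry links to its PCI device; the loop body is a simplification, not the exact functions.sh code:

  #!/usr/bin/env bash
  # Sketch (assumption) of the controller bookkeeping step from the trace.
  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                              # e.g. nvme0, nvme1
    pci=$(basename "$(readlink -f "$ctrl/device")")   # BDF, e.g. 0000:00:11.0
    ctrls[$ctrl_dev]=$ctrl_dev
    nvmes[$ctrl_dev]=${ctrl_dev}_ns                   # namespace map to fill
    bdfs[$ctrl_dev]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # index by ctrl number
  done
  declare -p bdfs ordered_ctrls                       # inspect the result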
00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:10:35.114 12:14:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:10:35.114-00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21-23: id-ctrl fields stored into nvme1: vid=0x1b36, ssvid=0x1af4, sn='12340', mn='QEMU NVMe Ctrl', fr='8.0.0', rab=6, ieee=525400, cmic=0, mdts=7, cntlid=0, ver=0x10400, rtd3r=0, rtd3e=0, oaes=0x100, ctratt=0x8000, rrls=0, cntrltype=1, fguid=00000000-0000-0000-0000-000000000000, crdt1=0, crdt2=0, crdt3=0, nvmsr=0, vwci=0, mec=0, oacs=0x12a, acl=3, aerl=3, frmw=0x3, lpa=0x7, elpe=0, npss=0, avscc=0, apsta=0, wctemp=343, cctemp=373.
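wctemp=343 and cctemp=373 above are the warning and critical composite temperature thresholds, which NVMe reports in Kelvin; a worked one-liner conversion (not part of the log) before the dump continues below:

  # 343 K and 373 K from id-ctrl above, rounded to whole degrees Celsius:
  printf '%d C warning, %d C critical\n' $((343 - 273)) $((373 - 273))
  # -> 70 C warning, 100 C critical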
'nvme1[mtfa]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.115 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.116 12:14:05 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:35.116 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:35.117 12:14:05 
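The 'local -n _ctrl_ns=nvme1_ns' line just traced is a bash nameref: the shared namespace loop writes through _ctrl_ns, and whichever per-controller map is currently bound receives the entries. A small sketch of the mechanism; the names nvme1_ns and ng1n1 mirror the trace, the rest is illustrative:

  #!/usr/bin/env bash
  # Nameref sketch (assumption, simplified from the traced flow).
  declare -A nvme1_ns=()
  declare -n _ctrl_ns=nvme1_ns      # _ctrl_ns now aliases nvme1_ns
  ns=/sys/class/nvme/nvme1/ng1n1
  _ctrl_ns[${ns##*n}]=ng1n1         # ${ns##*n} strips through the last 'n' -> 1
  declare -p nvme1_ns               # declare -A nvme1_ns=([1]="ng1n1")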
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
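The repeated IFS=: / read -r reg val / eval triplets above are the body of the nvme_get helper in test/nvme/functions.sh: it runs nvme-cli's identify command for a device and folds each "field : value" line of the output into a global bash associative array named after the device. A minimal sketch of the pattern, reconstructed from the trace (an approximation, not the verbatim upstream source):

    # Parse `nvme id-ctrl` / `nvme id-ns` output into a global associative array.
    # Usage: nvme_get ng1n1 id-ns /dev/ng1n1   ->   fills ${ng1n1[nsze]}, etc.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declare -gA ng1n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # "ps    0 " -> "ps0"
            [[ -n $val ]] || continue        # header lines carry no value part
            eval "${ref}[$reg]=\"${val# }\"" # e.g. ng1n1[nsze]="0x17a17a"
        done < <(nvme "$@")                  # the trace runs /usr/local/src/nvme-cli/nvme
    }

Afterwards every register is addressable by name, e.g. "${ng1n1[flbas]}", which is what the rest of the test relies on. The whitespace-stripped key also explains entries like nvme1[ps0] and nvme1[rwt] above: they come from the multi-field power-state lines of id-ctrl output.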
00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:35.117 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:35.118 12:14:05 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 
12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:35.118 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
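A quick sanity check on the values being captured for nvme1n1: flbas=0x7 selects LBA format 7, whose descriptor (lbaf7, shown for ng1n1 above and again for nvme1n1 below) reads 'ms:64 lbads:12 rp:0 (in use)', i.e. 2^12 = 4096-byte data blocks with 64 bytes of metadata each. With nsze=0x17a17a (1,548,666 blocks) that works out to about 6.3 GB of namespace data:

    # Derive the data capacity from the parsed fields (values from this trace).
    blocks=$(( 0x17a17a ))        # nsze
    bs=$(( 1 << 12 ))             # lbads:12 -> 4096-byte blocks
    echo $(( blocks * bs ))       # 6343335936 bytes, ~6.3 GB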
00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:35.119 
12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.119 12:14:05 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:35.119 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:35.120 12:14:05 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:35.120 12:14:05 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:35.120 12:14:05 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:35.120 12:14:05 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:35.120 12:14:05 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:35.120 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:35.121 12:14:05 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
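wctemp and cctemp are reported in kelvin per the NVMe spec, so the 343/373 captured above correspond to a 70 °C warning threshold and a 100 °C critical threshold for this emulated controller (kelvin offset rounded to 273):

    for t in 343 373; do printf '%s K = %s C\n' "$t" $(( t - 273 )); done
    # -> 343 K = 70 C
    #    373 K = 100 C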
00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:35.121 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:35.122 12:14:05 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:35.122 
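Among the controller fields captured just above, sqes=0x66 and cqes=0x44 are packed log2 nibbles: the low nibble is the required (minimum) queue-entry size and the high nibble the maximum, so this QEMU controller reports 64-byte submission-queue entries and 16-byte completion-queue entries. A minimal decode of those two values, assuming the nvme2 associative array the trace is building (decode_qes is an illustrative helper, not part of functions.sh):

    #!/usr/bin/env bash
    # Decode the packed sqes/cqes nibbles captured in the trace above.
    # Values are copied from the log; decode_qes is illustrative only.
    declare -A nvme2=( [sqes]=0x66 [cqes]=0x44 )

    decode_qes() {
      local v=$1
      echo "min=$(( 1 << (v & 0xf) )) max=$(( 1 << ((v >> 4) & 0xf) ))"
    }

    decode_qes "${nvme2[sqes]}"   # min=64 max=64 (submission entry bytes)
    decode_qes "${nvme2[cqes]}"   # min=16 max=16 (completion entry bytes)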
12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.122 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:35.123 
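The controller-level pass ends here: every "reg : val" pair emitted by nvme id-ctrl has been folded into the nvme2 associative array, with multi-word values such as the ps0 power-state line preserved by the quoted eval at functions.sh@23, and @53/@54 now switch to the controller's namespaces. A standalone sketch of that read/eval loop under canned input (parse_id_output and the sample lines are illustrative; the real script reads from /usr/local/src/nvme-cli/nvme):

    #!/usr/bin/env bash
    # Sketch of the nvme_get loop at functions.sh@21-23: split "reg : val"
    # lines on ':' and store them in a named global associative array.
    parse_id_output() {
      local ref=$1 reg val                 # $ref names the target array
      local -gA "$ref=()"                  # declare it globally, as @20 does
      while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # @22: skip lines without a value
        reg=${reg//[[:space:]]/}           # strip padding around the name
        val=${val#"${val%%[![:space:]]*}"} # trim leading spaces
        eval "${ref}[${reg}]=\"\$val\""    # quoted eval keeps spaces intact
      done
    }

    # Process substitution (not a pipe) keeps the function in the current
    # shell, so the global array survives the loop:
    parse_id_output nvme2 < <(printf '%s\n' \
      'sqes : 0x66' \
      'nn   : 256' \
      'ps0  : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0')
    declare -p nvme2                       # all three keys, ps0 intact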
12:14:05 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.123 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.124 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:35.125 12:14:05 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 
12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.125 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:35.126 12:14:05 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.126 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.126 12:14:05 
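Every namespace parse in this trace ends with the same eight-entry LBA-format table: each lbafN string carries the metadata size (ms), the log2 data-block size (lbads) and a relative-performance hint (rp), and the low nibble of flbas picks the active entry, so flbas=0x4 selects lbaf4 (ms:0 lbads:12), i.e. 4096-byte blocks with no metadata, matching the "(in use)" tag. A small decode over the values captured for ng2n2 (lba_data_size is an illustrative helper, not part of functions.sh):

    #!/usr/bin/env bash
    # Compute the active data-block size from the flbas/lbaf fields above.
    # Array contents are copied verbatim from the trace for ng2n2.
    declare -A ng2n2=(
      [flbas]=0x4
      [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    )

    lba_data_size() {
      local -n ns=$1
      local fmt=$(( ${ns[flbas]} & 0xf ))  # low nibble = format index
      local field=${ns[lbaf$fmt]}
      field=${field#*lbads:}               # keep text after "lbads:"
      echo $(( 1 << ${field%% *} ))        # block size = 2^lbads
    }

    lba_data_size ng2n2                    # prints 4096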
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.127 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.128 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.129 12:14:05 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:35.129 12:14:05 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.129 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:35.130 12:14:05 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.130 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.131 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:35.132 
12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:35.132 12:14:05 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.132 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:35.133 12:14:05 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.133 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:35.134 12:14:05 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:35.134 12:14:05 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:35.134 12:14:05 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:35.134 12:14:05 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:35.134 12:14:05 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:35.134 12:14:05 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.134 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:35.135 12:14:05 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 
12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:35.135 12:14:05 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.135 
12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:35.135 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:35.136 
12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:35.136 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:35.137 12:14:05 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:35.137 12:14:05 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
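The pages of trace above are the nvme_get helper at work: for each controller it shells out to `nvme id-ctrl /dev/nvmeX`, splits every output line on the first colon via IFS, and evals the register/value pair into a per-controller associative array (nvme0 through nvme3, plus one array per namespace). A minimal sketch of that parsing pattern, assuming nvme-cli is installed; it uses a plain assignment where nvme/functions.sh goes through eval, and /dev/nvme0 is only an example device:

#!/usr/bin/env bash
# Sketch of the nvme_get parsing loop traced above (simplified).
declare -A ctrl

parse_id_ctrl() {
  local dev=$1 reg val
  # nvme id-ctrl prints lines like "vid       : 0x1b36". Splitting on
  # ':' with read hands everything after the first colon to val, so
  # values that themselves contain colons (the ps0 line) stay intact.
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}             # register names carry no spaces
    val=${val#"${val%%[![:space:]]*}"}   # trim leading blanks, keep trailing ones
    [[ -n $reg ]] && ctrl[$reg]=$val
  done < <(nvme id-ctrl "$dev")
}

parse_id_ctrl /dev/nvme0
echo "vid=${ctrl[vid]} oncs=${ctrl[oncs]}"

Keeping trailing whitespace is deliberate: it is why the trace stores values like sn='12343 ' and mn='QEMU NVMe Ctrl '.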
00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:35.137 12:14:05 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:35.137 12:14:05 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:35.137 12:14:05 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:35.137 12:14:05 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:35.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:35.956 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:35.956 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:35.956 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:35.956 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:36.213 12:14:06 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:36.213 12:14:06 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:36.213 12:14:06 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.213 12:14:06 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:36.213 ************************************ 00:10:36.213 START TEST nvme_simple_copy 00:10:36.213 ************************************ 00:10:36.213 12:14:06 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:36.471 Initializing NVMe Controllers 00:10:36.471 Attaching to 0000:00:10.0 00:10:36.471 Controller supports SCC. Attached to 0000:00:10.0 00:10:36.471 Namespace ID: 1 size: 6GB 00:10:36.471 Initialization complete. 
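The controller pick that precedes the copy test reduces to one bitmask probe per controller: ctrl_has_scc reads the cached oncs register and tests bit 8, which in the Identify Controller data advertises the (Simple) Copy command. All four QEMU controllers report 0x15d, so the first of the matching set, nvme1, is returned. A standalone rendition of that check, with oncs hard-coded to the value from the trace:

#!/usr/bin/env bash
# Standalone version of the ctrl_has_scc test from nvme/functions.sh.
# ONCS bit 8 advertises Simple Copy; 0x15d is the value reported above.
oncs=0x15d

# In bash arithmetic, << binds tighter than &, so this is oncs & 0x100.
if (( oncs & 1 << 8 )); then
  echo "Simple Copy supported (oncs=$oncs)"
else
  echo "Simple Copy not supported (oncs=$oncs)"
fi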
00:10:36.471 00:10:36.471 Controller QEMU NVMe Ctrl (12340 ) 00:10:36.471 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:36.471 Namespace Block Size:4096 00:10:36.471 Writing LBAs 0 to 63 with Random Data 00:10:36.471 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:36.471 LBAs matching Written Data: 64 00:10:36.471 00:10:36.471 real 0m0.261s 00:10:36.471 user 0m0.103s 00:10:36.471 sys 0m0.056s 00:10:36.471 12:14:07 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.471 12:14:07 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:36.471 ************************************ 00:10:36.471 END TEST nvme_simple_copy 00:10:36.471 ************************************ 00:10:36.471 ************************************ 00:10:36.471 END TEST nvme_scc 00:10:36.471 ************************************ 00:10:36.471 00:10:36.471 real 0m7.639s 00:10:36.471 user 0m1.118s 00:10:36.471 sys 0m1.374s 00:10:36.471 12:14:07 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.471 12:14:07 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:36.471 12:14:07 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:36.471 12:14:07 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:36.471 12:14:07 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:36.471 12:14:07 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:36.471 12:14:07 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:36.471 12:14:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:36.471 12:14:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.471 12:14:07 -- common/autotest_common.sh@10 -- # set +x 00:10:36.471 ************************************ 00:10:36.471 START TEST nvme_fdp 00:10:36.471 ************************************ 00:10:36.471 12:14:07 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:10:36.471 * Looking for test storage... 00:10:36.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:36.471 12:14:07 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:36.471 12:14:07 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:36.471 12:14:07 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:36.471 12:14:07 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:36.471 12:14:07 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.471 12:14:07 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:36.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.471 --rc genhtml_branch_coverage=1 00:10:36.471 --rc genhtml_function_coverage=1 00:10:36.471 --rc genhtml_legend=1 00:10:36.471 --rc geninfo_all_blocks=1 00:10:36.471 --rc geninfo_unexecuted_blocks=1 00:10:36.471 00:10:36.471 ' 00:10:36.471 12:14:07 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:36.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.471 --rc genhtml_branch_coverage=1 00:10:36.471 --rc genhtml_function_coverage=1 00:10:36.471 --rc genhtml_legend=1 00:10:36.471 --rc geninfo_all_blocks=1 00:10:36.471 --rc geninfo_unexecuted_blocks=1 00:10:36.471 00:10:36.471 ' 00:10:36.471 12:14:07 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:36.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.471 --rc genhtml_branch_coverage=1 00:10:36.471 --rc genhtml_function_coverage=1 00:10:36.471 --rc genhtml_legend=1 00:10:36.471 --rc geninfo_all_blocks=1 00:10:36.471 --rc geninfo_unexecuted_blocks=1 00:10:36.471 00:10:36.471 ' 00:10:36.471 12:14:07 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:36.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.471 --rc genhtml_branch_coverage=1 00:10:36.471 --rc genhtml_function_coverage=1 00:10:36.471 --rc genhtml_legend=1 00:10:36.471 --rc geninfo_all_blocks=1 00:10:36.471 --rc geninfo_unexecuted_blocks=1 00:10:36.471 00:10:36.471 ' 00:10:36.471 12:14:07 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:36.471 12:14:07 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:36.471 12:14:07 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:36.471 12:14:07 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:36.471 12:14:07 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.471 12:14:07 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.471 12:14:07 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.471 12:14:07 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.471 12:14:07 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.471 12:14:07 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:36.471 12:14:07 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.471 12:14:07 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:36.471 12:14:07 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:36.471 12:14:07 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:36.471 12:14:07 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:36.471 12:14:07 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:36.471 12:14:07 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:36.471 12:14:07 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:36.471 12:14:07 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:36.471 12:14:07 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:36.471 12:14:07 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:36.471 12:14:07 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:36.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:36.987 Waiting for block devices as requested 00:10:36.987 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:36.987 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:37.244 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:37.244 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:42.521 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:42.521 12:14:13 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:42.521 12:14:13 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:42.521 12:14:13 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:42.521 12:14:13 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:42.521 12:14:13 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:42.521 12:14:13 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:42.521 12:14:13 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:42.521 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 
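
The fields accumulating in nvme0[] here are what the fdp test interrogates later; CTRATT in particular advertises controller attributes as a bitmask. As a hedged illustration of checking such a bit against the parsed array (bit 19 is Flexible Data Placement in NVMe 2.0; the helper name and exact check are an assumption, not necessarily what functions.sh implements):

    ctrl_has_fdp() {
        local -n _ctrl=$1                       # bash nameref, same mechanism as functions.sh@53
        (( (_ctrl[ctratt] & (1 << 19)) != 0 ))  # CTRATT bit 19 = FDP support (assumed check)
    }
    ctrl_has_fdp nvme0 && echo "nvme0 advertises FDP"

With ctratt=0x8000 as parsed above, nvme0 would not pass such a check; the enumeration loop records every controller regardless, so the test can later select one whose attributes match.
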
-- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:42.522 12:14:13 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.522 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:42.523 12:14:13 nvme_fdp -- 
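
The SQES/CQES values just stored (0x66 and 0x44) pack two power-of-two sizes per byte: bits 3:0 give the required submission/completion queue entry size, bits 7:4 the maximum. Decoding them with bash arithmetic:

    sqes=0x66 cqes=0x44
    printf 'SQE: required %d, max %d bytes\n' $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))
    printf 'CQE: required %d, max %d bytes\n' $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))
    # -> SQE: required 64, max 64 bytes; CQE: required 16, max 16 bytes
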
nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 
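
ONCS=0x15d is another bitmask worth reading: per the NVMe base specification it flags optional NVM commands (bit 0 Compare, bit 2 Dataset Management, bit 3 Write Zeroes, bit 4 Save/Select in Set/Get Features, bit 6 Timestamp, bit 8 Copy). A quick check in the same style as the tests above:

    oncs=0x15d
    (( oncs & (1 << 2) )) && echo "Dataset Management (deallocate) supported"
    (( oncs & (1 << 8) )) && echo "Copy command supported"
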
12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:42.523 12:14:13 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:42.523 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:42.524 12:14:13 
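
At functions.sh@53-57 the script switches from controller to namespace data: it binds a nameref to the per-controller table (local -n _ctrl_ns=nvme0_ns) and globs for namespace nodes with a bash extglob that matches both the generic character device (ng0n1) and the block device (nvme0n1). A standalone illustration of that pattern (shopt settings assumed; the path is the one from the log):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme0
    # "ng${ctrl##*nvme}" expands to "ng0", "${ctrl##*/}n" to "nvme0n"
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/}"   # -> ng0n1, then nvme0n1
    done
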
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:42.524 12:14:13 nvme_fdp -- 
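
NSZE, NCAP and NUSE are counted in logical blocks, not bytes. With the 4 KiB LBA format this namespace reports as in use further down (lbaf4, lbads:12), the 0x140000 just parsed works out to:

    echo $(( 0x140000 * (1 << 12) ))                 # 5368709120 bytes
    echo "$(( 0x140000 * (1 << 12) / 1024**3 )) GiB" # -> 5 GiB
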
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:42.524 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:42.525 12:14:13 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
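
The lbaf0-lbaf7 strings record the eight LBA formats this namespace offers; "(in use)" marks the active one, selected by the low nibble of FLBAS. A sketch recovering the active block size from the array populated above (key names exactly as parsed in this trace):

    lbaf_idx=$(( ${ng0n1[flbas]} & 0xf ))   # 0x4 & 0xf = 4 -> lbaf4
    lbads=${ng0n1[lbaf$lbaf_idx]#*lbads:}   # "ms:0 lbads:12 rp:0 (in use)" -> "12 rp:0 (in use)"
    lbads=${lbads%% *}                      # -> "12"
    echo "ng0n1 block size: $(( 1 << lbads )) bytes"   # -> 4096
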
00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.525 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:42.526 12:14:13 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:42.526 12:14:13 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:42.526 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:42.527 12:14:13 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:42.527 12:14:13 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:42.527 12:14:13 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:42.527 12:14:13 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:42.527 12:14:13 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.527 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
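
The IFS=: / read -r reg val / eval triplets repeating above are the body of the nvme_get helper in nvme/functions.sh: it runs nvme-cli (id-ctrl or id-ns) against a device node and folds each "field : value" output line into a global associative array named after that device. A minimal sketch of that loop, assuming a plain nvme binary on PATH (the trace uses /usr/local/src/nvme-cli/nvme) and simplified whitespace trimming, not a verbatim copy of functions.sh:

# Sketch of the nvme_get pattern traced above; trimming details and
# error handling are assumptions.
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                 # e.g. declares global array nvme1
    while IFS=: read -r reg val; do     # split "vid : 0x1b36" at the first ':'
        reg=${reg//[[:space:]]/}        # "vid     " -> "vid"
        val=${val# }                    # drop the leading blank
        [[ -n $val ]] || continue      # skip headings and blank lines
        eval "${ref}[\$reg]=\$val"      # -> nvme1[vid]=0x1b36
    done < <(nvme "$@")
}

Called as nvme_get nvme1 id-ctrl /dev/nvme1, it leaves ${nvme1[vid]} holding 0x1b36 for the QEMU controller identified in this run.
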
00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.528 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.529 12:14:13 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:42.529 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:42.530 12:14:13 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
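
Zooming out from the per-field noise, the functions.sh@47-63 markers show the enclosing walk: iterate /sys/class/nvme/nvme*, skip controllers whose PCI address fails pci_can_use (the block-list check from scripts/common.sh), identify the controller, then glob both the character (ngXnY) and block (nvmeXnY) namespace nodes and identify each of those too. A condensed sketch of that walk, reusing the nvme_get sketch above; pci_can_use is stubbed here, and the bookkeeping omits the nvmes/ordered_ctrls arrays the real script also fills:

shopt -s extglob                 # the @(...) namespace glob needs extglob
declare -A ctrls bdfs

# Stand-in for the scripts/common.sh predicate seen in the trace.
pci_can_use() { [[ ! ${PCI_BLOCKED:-} =~ $1 ]]; }

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:10.0
    pci_can_use "$pci" || continue
    ctrl_dev=${ctrl##*/}                             # nvme1
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
    ctrls[$ctrl_dev]=$ctrl_dev
    bdfs[$ctrl_dev]=$pci
    # Both device nodes of every namespace get an id-ns pass:
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                             # ng1n1, then nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
    done
done

That double pass is why each namespace shows up twice in the trace: once as ng1n1 (the char device) and once as nvme1n1 (the block device), with matching id-ns contents.
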
00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:42.530 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:42.531 12:14:13 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
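
Once filled, these arrays make later geometry checks one-liners. ng1n1 above carries flbas=0x7; the low nibble of FLBAS selects LBA format 7, whose descriptor a few entries further down reads 'ms:64 lbads:12 rp:0 (in use)', i.e. a 2^12 = 4096-byte data block with 64 metadata bytes. A small illustrative reader (lbads_of is a made-up name, not a functions.sh helper):

# Derive the in-use data-block size from an array filled by nvme_get.
lbads_of() {
    local -n _ns=$1                          # nameref to e.g. ng1n1
    local fmt=$(( ${_ns[flbas]} & 0xf ))     # 0x7 -> format index 7
    local desc=${_ns[lbaf$fmt]}              # "ms:64 lbads:12 rp:0 (in use)"
    desc=${desc##*lbads:}                    # "12 rp:0 (in use)"
    echo $(( 1 << ${desc%% *} ))             # 2^12 = 4096
}

lbads_of ng1n1 prints 4096 here, and would print 512 for a namespace formatted with one of the lbads:9 entries.
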
00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:42.531 12:14:13 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:42.532 12:14:13 nvme_fdp -- 
00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:10:42.531 12:14:13 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:10:42.532 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
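The @60-@63 records just above register the finished controller in a set of global maps, and the scan then moves on to the next /sys/class/nvme entry, gated by pci_can_use. A hedged reconstruction of that bookkeeping, modeled on the functions.sh@47-@63 and scripts/common.sh@18-@27 records below (the PCI_ALLOWED variable and the readlink-based bdf lookup are assumptions; the array names come from the trace):

    # Sketch of the controller scan implied by the trace; not verbatim source.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    pci_can_use() {
        [[ $PCI_ALLOWED =~ $1 ]] && return 0   # explicit allow list wins
        [[ -z $PCI_ALLOWED ]] && return 0      # empty list: every device passes
        return 1
    }

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # bdf, e.g. 0000:00:12.0
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                              # e.g. nvme2
        nvme_get_sketch "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of the per-ctrl ns map
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # indexed by controller number
    done

With an empty allow list, the gate reduces to the [[ =~ ... ]] miss followed by [[ -z '' ]] and return 0 that the trace records for 0000:00:12.0.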
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:10:42.533 12:14:13 nvme_fdp -- scripts/common.sh@18 -- # local i
00:10:42.533 12:14:13 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:10:42.533 12:14:13 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:42.533 12:14:13 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:10:42.533 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:10:42.534 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:10:42.535 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
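With nvme2's id-ctrl data captured, the trace continues below with the @53-@58 records: a nameref aliases the controller's namespace map, and an extglob pattern matches both the generic ng2n1 node and the nvme2n1 block node. A hedged sketch of that walk, reusing nvme_get_sketch from the earlier note (scan_namespaces_sketch is a hypothetical wrapper name):

    # Sketch of the per-controller namespace walk; reconstructed, not verbatim.
    shopt -s extglob

    scan_namespaces_sketch() {
        local ctrl=$1 ns ns_dev
        local ctrl_dev=${ctrl##*/}          # e.g. nvme2
        declare -gA "${ctrl_dev}_ns=()"     # e.g. nvme2_ns
        local -n _ctrl_ns=${ctrl_dev}_ns    # nameref: writes land in nvme2_ns

        # Matches /sys/class/nvme/nvme2/@(ng2|nvme2n)* -> ng2n1 and nvme2n1:
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get_sketch "$ns_dev" nvme id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev     # keyed by namespace index, e.g. "1"
        done
    }

Because ng2n1 and nvme2n1 share namespace index 1, the second iteration overwrites the first; the same pattern explains why the trace stored ng1n1 and then nvme1n1 into _ctrl_ns[1] earlier.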
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:10:42.536 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 '
val 00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.537 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:42.538 12:14:13 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.538 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 
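Every field captured above follows one fixed pattern from nvme/functions.sh: line @21 sets IFS=: and reads each line of `nvme id-ns` output into a register name and a value, @22 drops registers whose value is empty, and @23 evals the pair into the associative array named after the namespace node (ng2n1, ng2n2, ...). A minimal sketch of that loop, reconstructed from the trace alone; the function name nvme_get_sketch and the bash nameref standing in for the traced eval are illustrative assumptions, not the exact functions.sh source:

    # Sketch of the loop traced at functions.sh@21-23: fill a named
    # associative array from `nvme id-ns` key:value output.
    nvme_get_sketch() {
        local -n arr=$1                          # target array, e.g. ng2n2
        local dev=$2 reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # e.g. "lbaf  4 " -> lbaf4
            val="${val#"${val%%[![:space:]]*}"}" # trim leading spaces
            [[ -n $val ]] && arr[$reg]=$val      # @22: skip empty values
        done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")
    }

Invoked as `declare -A ng2n2; nvme_get_sketch ng2n2 /dev/ng2n2`, the sketch reproduces the assignments traced above, e.g. ng2n2[nsze]=0x100000 and ng2n2[mssrl]=128.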
12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.539 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:42.540 
12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:42.540 12:14:13 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:42.540 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:42.541 12:14:13 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:42.541 12:14:13 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:42.541 
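The transitions visible in this trace (functions.sh@54-58) come from the namespace-discovery loop: for ctrl=/sys/class/nvme/nvme2 the extglob `@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` matches both the character nodes (ng2n1..ng2n3) and the block nodes (nvme2n1..), each existing node is handed to nvme_get, and _ctrl_ns is indexed by the namespace id taken from `${ns##*n}`. A sketch under those assumptions, with error handling and the surrounding per-controller loop omitted:

    # Sketch of functions.sh@54-58: enumerate one controller's namespaces.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                    # ng2n1 ... nvme2n1 ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev         # index by namespace id
    done

Because the glob sorts the ng2nX nodes before nvme2nX, a later block node with the same id would overwrite its character-node entry in _ctrl_ns, which the ordering seen in this trace implies.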
12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:42.541 12:14:13 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.541 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:42.542 
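For reference when reading the lbafN strings parsed throughout this trace: in `nvme id-ns` output each LBA format reports ms (metadata bytes per LBA), lbads (LBA data size as a power of two, so lbads:9 is 512 B and lbads:12 is 4096 B), and rp (a relative-performance hint, 0 = best). With flbas=0x4 the in-use format is index 4, which is why lbaf4 (ms:0 lbads:12) carries the "(in use)" marker on every namespace here. A tiny illustrative decode, not part of functions.sh:

    # Decode the data size out of an lbaf string as captured above.
    lbaf_bytes() {
        local lbads=${1#*lbads:}
        lbads=${lbads%% *}
        echo $(( 1 << lbads ))    # lbads:12 -> 4096-byte LBAs
    }
    lbaf_bytes 'ms:0 lbads:12 rp:0 (in use)'   # prints 4096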
12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
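The cycle that repeats throughout the trace above — IFS=: and read -r reg val, a [[ -n ... ]] guard at functions.sh@22, then an eval at @23 — is the nvme_get helper flattening "nvme id-ns" output into a global associative array named after the device (nvme2n1 here). A minimal re-creation of that loop, with simplified trimming (the function body below is an assumption sketched from the trace, not the verbatim nvme/functions.sh source):

    # sketch: parse "key : value" lines into a named global assoc array
    nvme_get() {
      local ref=$1 cmd=$2 dev=$3 reg val
      local -gA "$ref=()"                    # e.g. declare -gA nvme2n1=()
      while IFS=: read -r reg val; do        # split each line on the first ':'
        reg=${reg//[[:space:]]/}             # "lbaf  0 " -> "lbaf0"
        val=$(echo $val)                     # collapse runs of whitespace
        [[ -n $reg && -n $val ]] || continue # skip blank lines, as at @22
        eval "${ref}[\$reg]=\$val"           # nvme2n1[nsze]=0x100000, ...
      done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }
    # usage, as in the trace: nvme_get nvme2n1 id-ns /dev/nvme2n1

The lbaf0-7 entries this produces are the namespace's LBA formats: lbaf4 reads "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte blocks with no metadata, and it is the active format, consistent with flbas=0x4.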
00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:42.542 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:42.543 12:14:13 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:42.543 12:14:13 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:42.543 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:42.544 12:14:13 
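The for-ns line traced just above (functions.sh@54) is worth unpacking: it is an extglob pattern matching both the generic character device ("ng2...") and the block namespaces ("nvme2n...") under the controller's sysfs directory, and @58 keys the _ctrl_ns map by the digits after the final "n". A hedged re-creation of just that walk (paths mirror the trace, the loop body is simplified):

    # sketch: enumerate a controller's namespaces the way functions.sh@54-58 does
    shopt -s extglob                   # @(...|...) needs extended globbing
    declare -A _ctrl_ns
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue         # as at @55: skip non-matching globs
      ns=${ns##*/}                     # nvme2n2, nvme2n3, ...
      _ctrl_ns[${ns##*n}]=$ns          # key "2" -> nvme2n2, "3" -> nvme2n3
    done

With ctrl=/sys/class/nvme/nvme2, "${ctrl##*nvme}" is "2" and "${ctrl##*/}" is "nvme2", so the pattern expands to @(ng2|nvme2n)* and each hit lands in the map under its namespace index.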
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.544 12:14:13 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.544 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:42.545 12:14:13 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:42.545 12:14:13 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:42.545 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:42.546 12:14:13 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:42.546 12:14:13 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:42.546 12:14:13 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:42.546 12:14:13 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- 
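At this point the nvme2 controller is fully recorded — ctrls/nvmes/bdfs map the device name to its namespace array and PCI address (0000:00:12.0), and ordered_ctrls indexes it by number — and the outer @47 loop moves on to nvme3, which first has to pass pci_can_use. The trace only shows the tests (@21's regex against an empty allow-list, @25's -z against an empty block-list, @27's return 0), so the following is a guess at the surrounding shape; the list variable names are assumptions:

    # sketch: BDF allow/block filter consistent with the common.sh@18-27 trace
    pci_can_use() {
      local i                                       # @18
      if [[ -n ${PCI_ALLOWED[*]} ]]; then           # allow-list set?
        [[ ${PCI_ALLOWED[*]} =~ $1 ]] || return 1   # @21: must be listed
      fi
      [[ -z ${PCI_BLOCKED[*]} ]] && return 0        # @25: nothing blocked
      [[ ${PCI_BLOCKED[*]} =~ $1 ]] && return 1     # blocked BDFs are skipped
      return 0                                      # @27
    }

Both lists are empty in this run (the @21 regex traces with an empty left-hand side), so every controller found under /sys/class/nvme is accepted and parsed.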
nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.546 12:14:13 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:42.546 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 
12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # 
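A few of the id-ctrl values captured here are easier to read with units attached: wctemp=343 and cctemp=373 are kelvins per the NVMe spec (the warning and critical composite-temperature thresholds), and mdts=7 caps transfers at 2^7 minimum-sized pages. A quick check in shell arithmetic — the 4 KiB minimum page size is the usual value for this QEMU controller and an assumption here:

    # kelvin -> celsius (273 K offset, ignoring the .15)
    echo "wctemp $(( 343 - 273 ))C, cctemp $(( 373 - 273 ))C"   # 70C, 100C
    # mdts=7: max transfer = 2^7 pages of 4 KiB = 512 KiB
    echo "mdts cap: $(( (1 << 7) * 4096 )) bytes"               # 524288

So this emulated controller warns at roughly 70 °C, goes critical at roughly 100 °C, and accepts I/Os up to 512 KiB.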
eval 'nvme3[hmmin]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.547 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.548 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
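The loop traced above is nvme/functions.sh consuming nvme id-ctrl output one "reg : val" pair at a time (IFS=: splits each line at the colon) and storing every field in the nvme3 associative array. A minimal standalone sketch of that pattern, assuming nvme-cli is installed and the device node /dev/nvme3 exists (both assumptions, not taken from this run):
#!/usr/bin/env bash
# Sketch only -- not part of functions.sh. Parse "reg : value" lines from
# `nvme id-ctrl` into a bash associative array, the shape the harness builds.
declare -A ctrl
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}   # register names are padded with spaces
    val=${val# }               # drop the single space after the colon
    [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
done < <(nvme id-ctrl /dev/nvme3)
echo "lpa=${ctrl[lpa]:-unset} wctemp=${ctrl[wctemp]:-unset}"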
00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:42.549 12:14:13 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:42.549 12:14:13 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:42.808 12:14:13 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:42.808 12:14:13 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:42.808 12:14:13 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:42.808 12:14:13 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:43.067 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:43.634 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:43.634 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:43.634 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:43.634 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:43.634 12:14:14 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:43.634 12:14:14 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:43.634 12:14:14 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.634 12:14:14 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:43.634 ************************************ 00:10:43.634 START TEST nvme_flexible_data_placement 00:10:43.634 ************************************ 00:10:43.634 12:14:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:43.892 Initializing NVMe Controllers 00:10:43.892 Attaching to 0000:00:13.0 00:10:43.892 Controller supports FDP Attached to 0000:00:13.0 00:10:43.892 Namespace ID: 1 Endurance Group ID: 1 00:10:43.892 Initialization complete. 
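The controller selection traced just before this test run keys off the Identify Controller CTRATT field: bit 19 (0x80000) is the Flexible Data Placement attribute, so nvme3 with ctratt=0x88010 qualifies while the other controllers, reporting 0x8000, do not. A hedged sketch of the same check, querying nvme-cli directly instead of the harness's parsed arrays:
# Sketch only: FDP capability test mirroring ctrl_has_fdp's bit-19 check.
has_fdp() {
    local ctratt
    ctratt=$(nvme id-ctrl "$1" | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
    (( ctratt & 1 << 19 ))
}
has_fdp /dev/nvme3 && echo "FDP supported"   # 0x88010 & 0x80000 != 0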
00:10:43.892 00:10:43.892 ================================== 00:10:43.892 == FDP tests for Namespace: #01 == 00:10:43.892 ================================== 00:10:43.892 00:10:43.892 Get Feature: FDP: 00:10:43.892 ================= 00:10:43.892 Enabled: Yes 00:10:43.892 FDP configuration Index: 0 00:10:43.892 00:10:43.892 FDP configurations log page 00:10:43.892 =========================== 00:10:43.892 Number of FDP configurations: 1 00:10:43.892 Version: 0 00:10:43.892 Size: 112 00:10:43.892 FDP Configuration Descriptor: 0 00:10:43.892 Descriptor Size: 96 00:10:43.892 Reclaim Group Identifier format: 2 00:10:43.892 FDP Volatile Write Cache: Not Present 00:10:43.892 FDP Configuration: Valid 00:10:43.892 Vendor Specific Size: 0 00:10:43.892 Number of Reclaim Groups: 2 00:10:43.892 Number of Reclaim Unit Handles: 8 00:10:43.892 Max Placement Identifiers: 128 00:10:43.892 Number of Namespaces Supported: 256 00:10:43.892 Reclaim Unit Nominal Size: 6000000 bytes 00:10:43.892 Estimated Reclaim Unit Time Limit: Not Reported 00:10:43.892 RUH Desc #000: RUH Type: Initially Isolated 00:10:43.892 RUH Desc #001: RUH Type: Initially Isolated 00:10:43.892 RUH Desc #002: RUH Type: Initially Isolated 00:10:43.892 RUH Desc #003: RUH Type: Initially Isolated 00:10:43.892 RUH Desc #004: RUH Type: Initially Isolated 00:10:43.892 RUH Desc #005: RUH Type: Initially Isolated 00:10:43.892 RUH Desc #006: RUH Type: Initially Isolated 00:10:43.892 RUH Desc #007: RUH Type: Initially Isolated 00:10:43.892 00:10:43.892 FDP reclaim unit handle usage log page 00:10:43.892 ====================================== 00:10:43.892 Number of Reclaim Unit Handles: 8 00:10:43.892 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:43.892 RUH Usage Desc #001: RUH Attributes: Unused 00:10:43.892 RUH Usage Desc #002: RUH Attributes: Unused 00:10:43.892 RUH Usage Desc #003: RUH Attributes: Unused 00:10:43.892 RUH Usage Desc #004: RUH Attributes: Unused 00:10:43.892 RUH Usage Desc #005: RUH Attributes: Unused 00:10:43.892 RUH Usage Desc #006: RUH Attributes: Unused 00:10:43.892 RUH Usage Desc #007: RUH Attributes: Unused 00:10:43.892 00:10:43.892 FDP statistics log page 00:10:43.892 ======================= 00:10:43.892 Host bytes with metadata written: 938758144 00:10:43.892 Media bytes with metadata written: 938889216 00:10:43.892 Media bytes erased: 0 00:10:43.892 00:10:43.892 FDP Reclaim unit handle status 00:10:43.892 ============================== 00:10:43.892 Number of RUHS descriptors: 2 00:10:43.892 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000040bb 00:10:43.892 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:43.892 00:10:43.892 FDP write on placement id: 0 success 00:10:43.892 00:10:43.892 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:43.892 00:10:43.892 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:43.892 00:10:43.892 Get Feature: FDP Events for Placement handle: #0 00:10:43.892 ======================== 00:10:43.892 Number of FDP Events: 6 00:10:43.892 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:43.892 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:43.892 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:43.892 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:43.892 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:43.892 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:43.892 00:10:43.892 FDP events log page
00:10:43.892 =================== 00:10:43.892 Number of FDP events: 1 00:10:43.892 FDP Event #0: 00:10:43.892 Event Type: RU Not Written to Capacity 00:10:43.892 Placement Identifier: Valid 00:10:43.892 NSID: Valid 00:10:43.892 Location: Valid 00:10:43.892 Placement Identifier: 0 00:10:43.892 Event Timestamp: 6 00:10:43.892 Namespace Identifier: 1 00:10:43.892 Reclaim Group Identifier: 0 00:10:43.892 Reclaim Unit Handle Identifier: 0 00:10:43.892 00:10:43.892 FDP test passed 00:10:43.892 00:10:43.892 real 0m0.243s 00:10:43.892 user 0m0.081s 00:10:43.892 sys 0m0.061s 00:10:43.892 12:14:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.892 12:14:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:43.892 ************************************ 00:10:43.892 END TEST nvme_flexible_data_placement 00:10:43.892 ************************************ 00:10:43.892 00:10:43.892 real 0m7.562s 00:10:43.892 user 0m1.043s 00:10:43.892 sys 0m1.374s 00:10:43.892 12:14:14 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.892 12:14:14 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:43.892 ************************************ 00:10:43.892 END TEST nvme_fdp 00:10:43.892 ************************************ 00:10:43.892 12:14:14 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:43.892 12:14:14 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:43.892 12:14:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:43.892 12:14:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.892 12:14:14 -- common/autotest_common.sh@10 -- # set +x 00:10:44.151 ************************************ 00:10:44.151 START TEST nvme_rpc 00:10:44.151 ************************************ 00:10:44.151 12:14:14 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:44.151 * Looking for test storage... 
00:10:44.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:44.151 12:14:14 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:44.151 12:14:14 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:44.151 12:14:14 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:44.151 12:14:14 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.151 12:14:14 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:44.151 12:14:14 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.151 12:14:14 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:44.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.151 --rc genhtml_branch_coverage=1 00:10:44.151 --rc genhtml_function_coverage=1 00:10:44.151 --rc genhtml_legend=1 00:10:44.151 --rc geninfo_all_blocks=1 00:10:44.152 --rc geninfo_unexecuted_blocks=1 00:10:44.152 00:10:44.152 ' 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:44.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.152 --rc genhtml_branch_coverage=1 00:10:44.152 --rc genhtml_function_coverage=1 00:10:44.152 --rc genhtml_legend=1 00:10:44.152 --rc geninfo_all_blocks=1 00:10:44.152 --rc geninfo_unexecuted_blocks=1 00:10:44.152 00:10:44.152 ' 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:44.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.152 --rc genhtml_branch_coverage=1 00:10:44.152 --rc genhtml_function_coverage=1 00:10:44.152 --rc genhtml_legend=1 00:10:44.152 --rc geninfo_all_blocks=1 00:10:44.152 --rc geninfo_unexecuted_blocks=1 00:10:44.152 00:10:44.152 ' 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:44.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.152 --rc genhtml_branch_coverage=1 00:10:44.152 --rc genhtml_function_coverage=1 00:10:44.152 --rc genhtml_legend=1 00:10:44.152 --rc geninfo_all_blocks=1 00:10:44.152 --rc geninfo_unexecuted_blocks=1 00:10:44.152 00:10:44.152 ' 00:10:44.152 12:14:14 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:44.152 12:14:14 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:44.152 12:14:14 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:44.152 12:14:14 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66004 00:10:44.152 12:14:14 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:44.152 12:14:14 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66004 00:10:44.152 12:14:14 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 66004 ']' 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.152 12:14:14 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.422 [2024-12-05 12:14:15.041934] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
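get_first_nvme_bdf, traced above, takes the target PCI address from the generated bdev configuration rather than from sysfs: gen_nvme.sh emits JSON and jq pulls each controller's traddr. Condensed to its core, with paths as in this run:
# Sketch only: first NVMe BDF from the generated config.
bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
echo "$bdf"   # 0000:00:10.0 on this host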
00:10:44.422 [2024-12-05 12:14:15.042075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66004 ] 00:10:44.422 [2024-12-05 12:14:15.206285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:44.690 [2024-12-05 12:14:15.323770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.691 [2024-12-05 12:14:15.323886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.255 12:14:15 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.255 12:14:15 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:45.255 12:14:15 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:45.513 Nvme0n1 00:10:45.513 12:14:16 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:45.513 12:14:16 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:45.771 request: 00:10:45.771 { 00:10:45.771 "bdev_name": "Nvme0n1", 00:10:45.771 "filename": "non_existing_file", 00:10:45.771 "method": "bdev_nvme_apply_firmware", 00:10:45.771 "req_id": 1 00:10:45.771 } 00:10:45.771 Got JSON-RPC error response 00:10:45.771 response: 00:10:45.771 { 00:10:45.771 "code": -32603, 00:10:45.771 "message": "open file failed." 00:10:45.771 } 00:10:45.771 12:14:16 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:45.771 12:14:16 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:45.771 12:14:16 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:45.771 12:14:16 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:45.771 12:14:16 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66004 00:10:45.771 12:14:16 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 66004 ']' 00:10:45.771 12:14:16 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 66004 00:10:45.771 12:14:16 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:46.029 12:14:16 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.029 12:14:16 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66004 00:10:46.029 12:14:16 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.029 12:14:16 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.029 12:14:16 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66004' 00:10:46.029 killing process with pid 66004 00:10:46.029 12:14:16 nvme_rpc -- common/autotest_common.sh@973 -- # kill 66004 00:10:46.029 12:14:16 nvme_rpc -- common/autotest_common.sh@978 -- # wait 66004 00:10:47.452 00:10:47.452 real 0m3.365s 00:10:47.452 user 0m6.366s 00:10:47.452 sys 0m0.557s 00:10:47.452 12:14:18 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.452 12:14:18 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.452 ************************************ 00:10:47.452 END TEST nvme_rpc 00:10:47.452 ************************************ 00:10:47.452 12:14:18 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:47.452 12:14:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:47.452 12:14:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.452 12:14:18 -- common/autotest_common.sh@10 -- # set +x 00:10:47.452 ************************************ 00:10:47.452 START TEST nvme_rpc_timeouts 00:10:47.452 ************************************ 00:10:47.452 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:47.452 * Looking for test storage... 00:10:47.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:47.452 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:47.452 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:47.452 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:10:47.452 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.452 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.453 12:14:18 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:47.453 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.453 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:47.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.453 --rc genhtml_branch_coverage=1 00:10:47.453 --rc genhtml_function_coverage=1 00:10:47.453 --rc genhtml_legend=1 00:10:47.453 --rc geninfo_all_blocks=1 00:10:47.453 --rc geninfo_unexecuted_blocks=1 00:10:47.453 00:10:47.453 ' 00:10:47.453 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:47.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.453 --rc genhtml_branch_coverage=1 00:10:47.453 --rc genhtml_function_coverage=1 00:10:47.453 --rc genhtml_legend=1 00:10:47.453 --rc geninfo_all_blocks=1 00:10:47.453 --rc geninfo_unexecuted_blocks=1 00:10:47.453 00:10:47.453 ' 00:10:47.453 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:47.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.453 --rc genhtml_branch_coverage=1 00:10:47.453 --rc genhtml_function_coverage=1 00:10:47.453 --rc genhtml_legend=1 00:10:47.453 --rc geninfo_all_blocks=1 00:10:47.453 --rc geninfo_unexecuted_blocks=1 00:10:47.453 00:10:47.453 ' 00:10:47.453 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:47.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.453 --rc genhtml_branch_coverage=1 00:10:47.453 --rc genhtml_function_coverage=1 00:10:47.453 --rc genhtml_legend=1 00:10:47.453 --rc geninfo_all_blocks=1 00:10:47.453 --rc geninfo_unexecuted_blocks=1 00:10:47.453 00:10:47.453 ' 00:10:47.453 12:14:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.453 12:14:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66069 00:10:47.453 12:14:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66069 00:10:47.453 12:14:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66106 00:10:47.453 12:14:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
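The cmp_versions trace that opens each of these test suites is the harness checking whether the installed lcov (1.15 here) predates 2.x, so it can pick compatible coverage flags. scripts/common.sh splits both versions on '.', '-' and ':' and compares field by field; a condensed sketch of that logic (version_lt is an illustrative name, the real helpers are lt and cmp_versions):
# Sketch only: field-wise dotted-version "less than".
version_lt() {
    local IFS=.-: i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "lcov predates 2.x"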
00:10:47.453 12:14:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66106 00:10:47.453 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 66106 ']' 00:10:47.453 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.453 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.453 12:14:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:47.453 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.453 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.453 12:14:18 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:47.710 [2024-12-05 12:14:18.385596] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:10:47.710 [2024-12-05 12:14:18.385737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66106 ] 00:10:47.710 [2024-12-05 12:14:18.549041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:47.968 [2024-12-05 12:14:18.650991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.968 [2024-12-05 12:14:18.651066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.533 Checking default timeout settings: 00:10:48.533 12:14:19 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.533 12:14:19 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:48.533 12:14:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:48.533 12:14:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:48.790 Making settings changes with rpc: 00:10:48.790 12:14:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:48.790 12:14:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:49.048 Check default vs. modified settings: 00:10:49.048 12:14:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:49.048 12:14:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66069 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66069 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.305 Setting action_on_timeout is changed as expected. 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66069 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66069 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.305 Setting timeout_us is changed as expected. 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66069 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66069 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.305 Setting timeout_admin_us is changed as expected. 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66069 /tmp/settings_modified_66069 00:10:49.305 12:14:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66106 00:10:49.305 12:14:20 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 66106 ']' 00:10:49.305 12:14:20 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 66106 00:10:49.305 12:14:20 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:49.563 12:14:20 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.563 12:14:20 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66106 00:10:49.563 killing process with pid 66106 00:10:49.563 12:14:20 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.563 12:14:20 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.563 12:14:20 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66106' 00:10:49.563 12:14:20 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 66106 00:10:49.563 12:14:20 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 66106 00:10:50.935 RPC TIMEOUT SETTING TEST PASSED. 00:10:50.935 12:14:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
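The pass/fail logic traced above is a plain text diff of two save_config dumps, one taken before and one after bdev_nvme_set_options, with grep, awk and sed reducing each setting to a bare token before comparison. Roughly, per setting (check_setting is an illustrative helper, not part of the suite; file names as in this run):
# Sketch only: compare one field across the two saved configs.
check_setting() {
    local key=$1 expected=$2 before after
    before=$(grep "$key" /tmp/settings_default_66069 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$key" /tmp/settings_modified_66069 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before != "$after" && $after == "$expected" ]]
}
check_setting timeout_us 12000000 && echo "Setting timeout_us is changed as expected."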
00:10:50.935 00:10:50.935 real 0m3.284s 00:10:50.935 user 0m6.379s 00:10:50.935 sys 0m0.550s 00:10:50.935 12:14:21 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.935 12:14:21 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:50.935 ************************************ 00:10:50.935 END TEST nvme_rpc_timeouts 00:10:50.935 ************************************ 00:10:50.935 12:14:21 -- spdk/autotest.sh@239 -- # uname -s 00:10:50.935 12:14:21 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:50.935 12:14:21 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:50.935 12:14:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:50.935 12:14:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.935 12:14:21 -- common/autotest_common.sh@10 -- # set +x 00:10:50.935 ************************************ 00:10:50.935 START TEST sw_hotplug 00:10:50.935 ************************************ 00:10:50.935 12:14:21 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:50.935 * Looking for test storage... 00:10:50.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:50.935 12:14:21 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:50.935 12:14:21 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:50.935 12:14:21 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:50.935 12:14:21 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.935 12:14:21 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:50.935 12:14:21 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.935 12:14:21 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:50.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.935 --rc genhtml_branch_coverage=1 00:10:50.935 --rc genhtml_function_coverage=1 00:10:50.935 --rc genhtml_legend=1 00:10:50.935 --rc geninfo_all_blocks=1 00:10:50.935 --rc geninfo_unexecuted_blocks=1 00:10:50.935 00:10:50.935 ' 00:10:50.935 12:14:21 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:50.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.935 --rc genhtml_branch_coverage=1 00:10:50.935 --rc genhtml_function_coverage=1 00:10:50.935 --rc genhtml_legend=1 00:10:50.936 --rc geninfo_all_blocks=1 00:10:50.936 --rc geninfo_unexecuted_blocks=1 00:10:50.936 00:10:50.936 ' 00:10:50.936 12:14:21 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:50.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.936 --rc genhtml_branch_coverage=1 00:10:50.936 --rc genhtml_function_coverage=1 00:10:50.936 --rc genhtml_legend=1 00:10:50.936 --rc geninfo_all_blocks=1 00:10:50.936 --rc geninfo_unexecuted_blocks=1 00:10:50.936 00:10:50.936 ' 00:10:50.936 12:14:21 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:50.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.936 --rc genhtml_branch_coverage=1 00:10:50.936 --rc genhtml_function_coverage=1 00:10:50.936 --rc genhtml_legend=1 00:10:50.936 --rc geninfo_all_blocks=1 00:10:50.936 --rc geninfo_unexecuted_blocks=1 00:10:50.936 00:10:50.936 ' 00:10:50.936 12:14:21 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:51.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:51.192 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:51.192 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:51.192 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:51.192 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:51.451 12:14:22 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:51.451 12:14:22 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:51.451 12:14:22 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
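nvme_in_userspace, expanded in the trace that follows, enumerates NVMe controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express), then drops any BDF that pci_can_use rejects. Its lspci core, reassembled from the trace below:
# Sketch only: list NVMe BDFs by class 01/08, prog-if 02.
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'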
00:10:51.451 12:14:22 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:51.451 12:14:22 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:51.451 12:14:22 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:51.451 12:14:22 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:51.451 12:14:22 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:51.451 12:14:22 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:51.709 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:51.967 Waiting for block devices as requested 00:10:51.967 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:51.967 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:51.967 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:51.967 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:57.270 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:57.270 12:14:27 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:57.270 12:14:27 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:57.527 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:57.527 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:57.527 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:57.785 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:58.043 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:58.043 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:58.043 12:14:28 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:58.043 12:14:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.301 12:14:28 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:58.301 12:14:28 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:58.301 12:14:28 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66967 00:10:58.301 12:14:28 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:58.301 12:14:28 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:58.301 12:14:28 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:58.301 12:14:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:58.301 12:14:28 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:58.301 12:14:28 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:58.301 12:14:28 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:58.301 12:14:28 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:58.301 12:14:28 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:58.301 12:14:28 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:58.301 12:14:28 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:58.301 12:14:28 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:58.301 12:14:28 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:58.301 12:14:28 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:58.301 Initializing NVMe Controllers 00:10:58.301 Attaching to 0000:00:10.0 00:10:58.301 Attaching to 0000:00:11.0 00:10:58.301 Attached to 0000:00:10.0 00:10:58.301 Attached to 0000:00:11.0 00:10:58.301 Initialization complete. Starting I/O... 
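The nvme_in_userspace walk traced before the reset (scripts/common.sh@233-329) boils down to: ask lspci for class 01 / subclass 08 / prog-if 02 functions, then keep each BDF only if pci_can_use allows it. A condensed sketch with the quote handling made explicit; the function name nvme_bdfs and the simplified PCI_ALLOWED check are this sketch's own:

    nvme_bdfs() {
        # lspci -mm -n -D lines look like: 0000:00:10.0 "0108" "1b36" "0010" ... -p02
        lspci -mm -n -D | grep -i -- -p02 \
            | awk '{ gsub(/"/, "", $2) } $2 == "0108" { print $1 }' \
            | while read -r bdf; do
                # pci_can_use, reduced to the allow-list case
                [[ -z ${PCI_ALLOWED:-} || " $PCI_ALLOWED " == *" $bdf "* ]] && echo "$bdf"
            done
    }
    nvmes=($(nvme_bdfs))                 # 4 BDFs on this VM
    nvme_count=2
    nvmes=("${nvmes[@]::nvme_count}")    # sw_hotplug.sh tests only the first two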
00:10:58.560 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:58.560 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:58.560 00:10:59.493 QEMU NVMe Ctrl (12340 ): 2388 I/Os completed (+2388) 00:10:59.493 QEMU NVMe Ctrl (12341 ): 2442 I/Os completed (+2442) 00:10:59.493 00:11:00.432 QEMU NVMe Ctrl (12340 ): 5300 I/Os completed (+2912) 00:11:00.432 QEMU NVMe Ctrl (12341 ): 5366 I/Os completed (+2924) 00:11:00.432 00:11:01.365 QEMU NVMe Ctrl (12340 ): 8225 I/Os completed (+2925) 00:11:01.365 QEMU NVMe Ctrl (12341 ): 8595 I/Os completed (+3229) 00:11:01.365 00:11:02.738 QEMU NVMe Ctrl (12340 ): 11824 I/Os completed (+3599) 00:11:02.738 QEMU NVMe Ctrl (12341 ): 12550 I/Os completed (+3955) 00:11:02.738 00:11:03.669 QEMU NVMe Ctrl (12340 ): 15511 I/Os completed (+3687) 00:11:03.669 QEMU NVMe Ctrl (12341 ): 17057 I/Os completed (+4507) 00:11:03.669 00:11:04.236 12:14:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:04.236 12:14:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.236 12:14:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.236 [2024-12-05 12:14:34.968386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:04.236 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:04.236 [2024-12-05 12:14:34.969635] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 [2024-12-05 12:14:34.969693] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 [2024-12-05 12:14:34.969713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 [2024-12-05 12:14:34.969734] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:04.236 [2024-12-05 12:14:34.971766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 [2024-12-05 12:14:34.971815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 [2024-12-05 12:14:34.971831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 [2024-12-05 12:14:34.971846] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 12:14:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.236 12:14:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.236 [2024-12-05 12:14:34.991497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
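The device-side poller for this first phase is SPDK's hotplug example app, started in the background a few records back (sw_hotplug.sh@77-85). The flags are copied from the trace rather than documented here, and killprocess is the harness's own kill-and-wait helper:

    trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT
    /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning &
    hotplug_pid=$!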
00:11:04.236 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:04.236 [2024-12-05 12:14:34.992580] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 [2024-12-05 12:14:34.992618] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 [2024-12-05 12:14:34.992641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 [2024-12-05 12:14:34.992660] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:04.236 [2024-12-05 12:14:34.994380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 [2024-12-05 12:14:34.994417] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 [2024-12-05 12:14:34.994433] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 [2024-12-05 12:14:34.994447] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.236 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:04.236 EAL: Scan for (pci) bus failed. 00:11:04.236 12:14:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:04.236 12:14:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:04.236 12:14:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.236 12:14:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.236 12:14:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:04.495 12:14:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:04.495 12:14:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.495 12:14:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.495 12:14:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.495 12:14:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:04.495 Attaching to 0000:00:10.0 00:11:04.495 Attached to 0000:00:10.0 00:11:04.495 QEMU NVMe Ctrl (12340 ): 48 I/Os completed (+48) 00:11:04.495 00:11:04.495 12:14:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:04.495 12:14:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.495 12:14:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:04.495 Attaching to 0000:00:11.0 00:11:04.495 Attached to 0000:00:11.0 00:11:05.429 QEMU NVMe Ctrl (12340 ): 3515 I/Os completed (+3467) 00:11:05.429 QEMU NVMe Ctrl (12341 ): 3371 I/Os completed (+3371) 00:11:05.429 00:11:06.365 QEMU NVMe Ctrl (12340 ): 7230 I/Os completed (+3715) 00:11:06.365 QEMU NVMe Ctrl (12341 ): 6895 I/Os completed (+3524) 00:11:06.365 00:11:07.737 QEMU NVMe Ctrl (12340 ): 10775 I/Os completed (+3545) 00:11:07.737 QEMU NVMe Ctrl (12341 ): 10347 I/Os completed (+3452) 00:11:07.737 00:11:08.303 QEMU NVMe Ctrl (12340 ): 14193 I/Os completed (+3418) 00:11:08.303 QEMU NVMe Ctrl (12341 ): 13811 I/Os completed (+3464) 00:11:08.303 00:11:09.676 QEMU NVMe Ctrl (12340 ): 17770 I/Os completed (+3577) 00:11:09.676 QEMU NVMe Ctrl (12341 ): 17359 I/Os completed (+3548) 00:11:09.676 00:11:10.610 QEMU NVMe Ctrl (12340 ): 21487 I/Os completed (+3717) 00:11:10.610 QEMU NVMe Ctrl (12341 ): 21312 I/Os completed (+3953) 00:11:10.610 00:11:11.544 QEMU NVMe Ctrl (12340 ): 24850 I/Os completed (+3363) 
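Each of the three hotplug events in this phase has the same shape: surprise-remove both allowed functions, give the app time to notice, then rescan and rebind. The xtrace only shows the values being echoed (sw_hotplug.sh@40 and @56-62), not the sysfs files they go to, so the paths below are an assumption based on standard Linux PCI hotplug rather than a transcript of the script:

    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"      # surprise removal
    done
    sleep "$hotplug_wait"                                # hotplug_wait=6 per sw_hotplug.sh@131
    echo 1 > /sys/bus/pci/rescan                         # bring the functions back
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe         # rebind to the userspace driver
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done

The 'aborting outstanding command' ERROR records around each removal are the expected teardown path here: the PCIe qpairs abort whatever was in flight when the controller disappears.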
00:11:11.544 QEMU NVMe Ctrl (12341 ): 25460 I/Os completed (+4148) 00:11:11.544 00:11:12.493 QEMU NVMe Ctrl (12340 ): 28429 I/Os completed (+3579) 00:11:12.493 QEMU NVMe Ctrl (12341 ): 28924 I/Os completed (+3464) 00:11:12.493 00:11:13.425 QEMU NVMe Ctrl (12340 ): 31931 I/Os completed (+3502) 00:11:13.425 QEMU NVMe Ctrl (12341 ): 32402 I/Os completed (+3478) 00:11:13.425 00:11:14.355 QEMU NVMe Ctrl (12340 ): 35482 I/Os completed (+3551) 00:11:14.355 QEMU NVMe Ctrl (12341 ): 35894 I/Os completed (+3492) 00:11:14.355 00:11:15.723 QEMU NVMe Ctrl (12340 ): 38977 I/Os completed (+3495) 00:11:15.723 QEMU NVMe Ctrl (12341 ): 39326 I/Os completed (+3432) 00:11:15.723 00:11:16.656 QEMU NVMe Ctrl (12340 ): 42545 I/Os completed (+3568) 00:11:16.656 QEMU NVMe Ctrl (12341 ): 42847 I/Os completed (+3521) 00:11:16.656 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.656 [2024-12-05 12:14:47.234253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:16.656 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:16.656 [2024-12-05 12:14:47.235230] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 [2024-12-05 12:14:47.235275] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 [2024-12-05 12:14:47.235291] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 [2024-12-05 12:14:47.235309] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:16.656 [2024-12-05 12:14:47.236969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 [2024-12-05 12:14:47.237013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 [2024-12-05 12:14:47.237026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 [2024-12-05 12:14:47.237039] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.656 [2024-12-05 12:14:47.256854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:16.656 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:16.656 [2024-12-05 12:14:47.257753] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 [2024-12-05 12:14:47.257793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 [2024-12-05 12:14:47.257813] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 [2024-12-05 12:14:47.257827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:16.656 [2024-12-05 12:14:47.259245] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 [2024-12-05 12:14:47.259278] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 [2024-12-05 12:14:47.259292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 [2024-12-05 12:14:47.259304] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:16.656 Attaching to 0000:00:10.0 00:11:16.656 Attached to 0000:00:10.0 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.656 12:14:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:16.656 Attaching to 0000:00:11.0 00:11:16.656 Attached to 0000:00:11.0 00:11:17.592 QEMU NVMe Ctrl (12340 ): 2712 I/Os completed (+2712) 00:11:17.592 QEMU NVMe Ctrl (12341 ): 2335 I/Os completed (+2335) 00:11:17.592 00:11:18.525 QEMU NVMe Ctrl (12340 ): 6556 I/Os completed (+3844) 00:11:18.525 QEMU NVMe Ctrl (12341 ): 5846 I/Os completed (+3511) 00:11:18.525 00:11:19.459 QEMU NVMe Ctrl (12340 ): 10413 I/Os completed (+3857) 00:11:19.459 QEMU NVMe Ctrl (12341 ): 9353 I/Os completed (+3507) 00:11:19.459 00:11:20.394 QEMU NVMe Ctrl (12340 ): 14195 I/Os completed (+3782) 00:11:20.394 QEMU NVMe Ctrl (12341 ): 12859 I/Os completed (+3506) 00:11:20.394 00:11:21.329 QEMU NVMe Ctrl (12340 ): 17738 I/Os completed (+3543) 00:11:21.329 QEMU NVMe Ctrl (12341 ): 16339 I/Os completed (+3480) 00:11:21.329 00:11:22.703 QEMU NVMe Ctrl (12340 ): 21388 I/Os completed (+3650) 00:11:22.703 QEMU NVMe Ctrl (12341 ): 19783 I/Os completed (+3444) 00:11:22.703 00:11:23.637 QEMU NVMe Ctrl (12340 ): 25098 I/Os completed (+3710) 00:11:23.637 QEMU NVMe Ctrl (12341 ): 23270 I/Os completed (+3487) 00:11:23.637 00:11:24.577 QEMU NVMe Ctrl (12340 ): 28716 I/Os completed (+3618) 00:11:24.577 QEMU NVMe Ctrl (12341 ): 26810 I/Os completed (+3540) 00:11:24.577 
00:11:25.530 QEMU NVMe Ctrl (12340 ): 31987 I/Os completed (+3271) 00:11:25.530 QEMU NVMe Ctrl (12341 ): 30268 I/Os completed (+3458) 00:11:25.530 00:11:26.463 QEMU NVMe Ctrl (12340 ): 35122 I/Os completed (+3135) 00:11:26.463 QEMU NVMe Ctrl (12341 ): 34051 I/Os completed (+3783) 00:11:26.463 00:11:27.396 QEMU NVMe Ctrl (12340 ): 38360 I/Os completed (+3238) 00:11:27.396 QEMU NVMe Ctrl (12341 ): 37923 I/Os completed (+3872) 00:11:27.396 00:11:28.329 QEMU NVMe Ctrl (12340 ): 41974 I/Os completed (+3614) 00:11:28.329 QEMU NVMe Ctrl (12341 ): 41408 I/Os completed (+3485) 00:11:28.329 00:11:28.899 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:28.899 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:28.899 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:28.899 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:28.899 [2024-12-05 12:14:59.505341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:28.899 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:28.899 [2024-12-05 12:14:59.506621] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.899 [2024-12-05 12:14:59.506674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.899 [2024-12-05 12:14:59.506691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.899 [2024-12-05 12:14:59.506708] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.899 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:28.899 [2024-12-05 12:14:59.508776] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.899 [2024-12-05 12:14:59.508822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.899 [2024-12-05 12:14:59.508839] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.899 [2024-12-05 12:14:59.508858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.899 EAL: Cannot open sysfs resource 00:11:28.899 EAL: pci_scan_one(): cannot parse resource 00:11:28.899 EAL: Scan for (pci) bus failed. 00:11:28.899 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:28.899 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:28.899 [2024-12-05 12:14:59.528541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:28.899 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:28.899 [2024-12-05 12:14:59.529634] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.899 [2024-12-05 12:14:59.529681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.899 [2024-12-05 12:14:59.529700] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.900 [2024-12-05 12:14:59.529715] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.900 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:28.900 [2024-12-05 12:14:59.531430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.900 [2024-12-05 12:14:59.531483] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.900 [2024-12-05 12:14:59.531502] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.900 [2024-12-05 12:14:59.531518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.900 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:28.900 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:28.900 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:28.900 EAL: Scan for (pci) bus failed. 00:11:28.900 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:28.900 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:28.900 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:28.900 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:28.900 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:28.900 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:28.900 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:28.900 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:28.900 Attaching to 0000:00:10.0 00:11:28.900 Attached to 0000:00:10.0 00:11:29.161 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:29.161 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:29.161 12:14:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:29.161 Attaching to 0000:00:11.0 00:11:29.161 Attached to 0000:00:11.0 00:11:29.161 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:29.161 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:29.161 [2024-12-05 12:14:59.804615] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:41.441 12:15:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:41.441 12:15:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:41.441 12:15:11 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.83 00:11:41.441 12:15:11 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.83 00:11:41.441 12:15:11 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:41.441 12:15:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.83 00:11:41.441 12:15:11 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.83 2 00:11:41.441 remove_attach_helper took 42.83s to complete (handling 2 nvme drive(s)) 12:15:11 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:48.022 12:15:17 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66967 00:11:48.022 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66967) - No such process 00:11:48.022 12:15:17 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66967 00:11:48.022 12:15:17 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:48.022 12:15:17 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:48.022 12:15:17 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:48.022 12:15:17 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67508 00:11:48.022 12:15:17 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:48.022 12:15:17 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67508 00:11:48.022 12:15:17 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67508 ']' 00:11:48.022 12:15:17 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.022 12:15:17 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.022 12:15:17 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.022 12:15:17 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.022 12:15:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.022 12:15:17 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:48.022 [2024-12-05 12:15:17.890533] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
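Phase two (tgt_run_hotplug) swaps the example app for a full spdk_tgt and detects removal through the bdev layer instead. Reduced to a sketch, run from the repo root; waitforlisten is the harness helper, approximated below by polling a cheap RPC (rpc_get_methods) until the socket answers:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2        # waiting on /var/tmp/spdk.sock, as waitforlisten does
    done
    scripts/rpc.py bdev_nvme_set_hotplug -e    # enable the bdev-layer hotplug monitor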
00:11:48.022 [2024-12-05 12:15:17.890841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67508 ] 00:11:48.022 [2024-12-05 12:15:18.048695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.022 [2024-12-05 12:15:18.164196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.022 12:15:18 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.022 12:15:18 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:48.022 12:15:18 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:48.022 12:15:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.022 12:15:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.022 12:15:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.022 12:15:18 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:48.022 12:15:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:48.022 12:15:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:48.022 12:15:18 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:48.022 12:15:18 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:48.022 12:15:18 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:48.022 12:15:18 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:48.022 12:15:18 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:48.022 12:15:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:48.022 12:15:18 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:48.022 12:15:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:48.022 12:15:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:48.022 12:15:18 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:54.585 12:15:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:54.585 12:15:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:54.585 12:15:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:54.585 12:15:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:54.585 12:15:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:54.585 12:15:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:54.585 12:15:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:54.585 12:15:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:54.585 12:15:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:54.585 12:15:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:54.585 12:15:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:54.585 12:15:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.585 12:15:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.585 12:15:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.585 12:15:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:54.585 12:15:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:54.585 [2024-12-05 12:15:24.915881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:54.585 [2024-12-05 12:15:24.917368] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.585 [2024-12-05 12:15:24.917409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.585 [2024-12-05 12:15:24.917424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.585 [2024-12-05 12:15:24.917446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.585 [2024-12-05 12:15:24.917454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.585 [2024-12-05 12:15:24.917472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.585 [2024-12-05 12:15:24.917481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.585 [2024-12-05 12:15:24.917489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.585 [2024-12-05 12:15:24.917497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.585 [2024-12-05 12:15:24.917509] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.585 [2024-12-05 12:15:24.917515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.585 [2024-12-05 12:15:24.917523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.585 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:54.585 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:54.585 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:54.585 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:54.585 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:54.585 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:54.585 12:15:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.585 12:15:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.585 12:15:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.585 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:54.585 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:54.843 [2024-12-05 12:15:25.515881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
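With use_bdev=true the helper no longer trusts sysfs; it polls the target until the removed controllers drop out of bdev_get_bdevs. That is the bdev_bdfs pipeline and the 'Still waiting ...' loop traced at sw_hotplug.sh@12-13 and @50-51, assembled in one place (rpc_cmd in the trace is a thin wrapper over scripts/rpc.py):

    bdev_bdfs() {
        scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done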
00:11:54.843 [2024-12-05 12:15:25.517339] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.843 [2024-12-05 12:15:25.517374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.843 [2024-12-05 12:15:25.517387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.843 [2024-12-05 12:15:25.517407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.843 [2024-12-05 12:15:25.517416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.843 [2024-12-05 12:15:25.517424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.843 [2024-12-05 12:15:25.517433] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.843 [2024-12-05 12:15:25.517440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.843 [2024-12-05 12:15:25.517448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.843 [2024-12-05 12:15:25.517455] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.843 [2024-12-05 12:15:25.517476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.843 [2024-12-05 12:15:25.517483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.103 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:55.103 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:55.103 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:55.103 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.103 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.103 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.103 12:15:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.103 12:15:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.103 12:15:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.372 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:55.372 12:15:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:55.372 12:15:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:55.372 12:15:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:55.372 12:15:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:55.372 12:15:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:55.372 12:15:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:55.372 12:15:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:55.372 12:15:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:55.372 12:15:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:55.372 12:15:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:55.372 12:15:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:55.372 12:15:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:07.573 12:15:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.573 12:15:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.573 12:15:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:07.573 12:15:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:07.573 12:15:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.573 12:15:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:07.573 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:07.573 [2024-12-05 12:15:38.316060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
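Once the wait loop drains, the helper re-attaches everything and asserts that the rediscovered set matches the two BDFs it started with; the backslash-escaped string at sw_hotplug.sh@71 is just xtrace's rendering of the literal right-hand side of that [[ == ]] comparison:

    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]    # expects '0000:00:10.0 0000:00:11.0'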
00:12:07.573 [2024-12-05 12:15:38.317620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.573 [2024-12-05 12:15:38.317660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.573 [2024-12-05 12:15:38.317672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.573 [2024-12-05 12:15:38.317695] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.573 [2024-12-05 12:15:38.317704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.573 [2024-12-05 12:15:38.317713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.573 [2024-12-05 12:15:38.317721] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.573 [2024-12-05 12:15:38.317730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.573 [2024-12-05 12:15:38.317737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.573 [2024-12-05 12:15:38.317746] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.573 [2024-12-05 12:15:38.317753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.573 [2024-12-05 12:15:38.317762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.151 [2024-12-05 12:15:38.716059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:08.152 [2024-12-05 12:15:38.717517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.152 [2024-12-05 12:15:38.717551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.152 [2024-12-05 12:15:38.717565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.152 [2024-12-05 12:15:38.717584] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.152 [2024-12-05 12:15:38.717593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.152 [2024-12-05 12:15:38.717601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.152 [2024-12-05 12:15:38.717610] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.152 [2024-12-05 12:15:38.717617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.152 [2024-12-05 12:15:38.717625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.152 [2024-12-05 12:15:38.717632] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.152 [2024-12-05 12:15:38.717641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.152 [2024-12-05 12:15:38.717647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.152 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:08.152 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:08.152 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:08.152 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.152 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.152 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.152 12:15:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.152 12:15:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.152 12:15:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.152 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:08.153 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:08.153 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:08.153 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:08.153 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:08.153 12:15:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:08.153 12:15:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:08.153 12:15:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:08.153 12:15:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:08.153 12:15:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:08.415 12:15:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:08.415 12:15:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:08.415 12:15:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:20.607 12:15:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.607 12:15:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.607 12:15:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:20.607 12:15:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.607 12:15:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.607 12:15:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:20.607 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:20.607 [2024-12-05 12:15:51.216264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:20.607 [2024-12-05 12:15:51.217818] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.607 [2024-12-05 12:15:51.217850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.607 [2024-12-05 12:15:51.217861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.607 [2024-12-05 12:15:51.217882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.607 [2024-12-05 12:15:51.217890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.607 [2024-12-05 12:15:51.217902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.607 [2024-12-05 12:15:51.217911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.607 [2024-12-05 12:15:51.217920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.607 [2024-12-05 12:15:51.217927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.607 [2024-12-05 12:15:51.217941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.607 [2024-12-05 12:15:51.217948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.607 [2024-12-05 12:15:51.217956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.866 [2024-12-05 12:15:51.616265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:20.866 [2024-12-05 12:15:51.617754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.866 [2024-12-05 12:15:51.617787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.866 [2024-12-05 12:15:51.617800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.866 [2024-12-05 12:15:51.617818] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.866 [2024-12-05 12:15:51.617828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.866 [2024-12-05 12:15:51.617835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.866 [2024-12-05 12:15:51.617846] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.866 [2024-12-05 12:15:51.617853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.866 [2024-12-05 12:15:51.617864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.866 [2024-12-05 12:15:51.617872] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.866 [2024-12-05 12:15:51.617880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.866 [2024-12-05 12:15:51.617886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.866 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:20.866 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:20.866 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:20.866 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:20.866 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:20.866 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:20.866 12:15:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.866 12:15:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.866 12:15:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.124 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:21.124 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:21.124 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:21.124 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:21.125 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:21.125 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:21.125 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:21.125 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:21.125 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:21.125 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:21.125 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:21.125 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:21.125 12:15:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:33.320 12:16:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:33.320 12:16:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:33.320 12:16:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:33.320 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:33.320 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:33.320 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.320 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:33.320 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.20 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.20 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:33.320 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.20 00:12:33.320 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.20 2 00:12:33.320 remove_attach_helper took 45.20s to complete (handling 2 nvme drive(s)) 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.320 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.320 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:33.320 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:33.320 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:33.320 12:16:04 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:33.320 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:33.320 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:33.320 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:33.320 12:16:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:33.320 12:16:04 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:39.920 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:39.920 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:39.920 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:39.920 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:39.920 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:39.920 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:39.920 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:39.920 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:39.920 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:39.920 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:39.920 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:39.920 12:16:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.920 12:16:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.920 12:16:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.920 [2024-12-05 12:16:10.148650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:39.920 [2024-12-05 12:16:10.150107] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.920 [2024-12-05 12:16:10.150219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.920 [2024-12-05 12:16:10.150299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.920 [2024-12-05 12:16:10.150389] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.920 [2024-12-05 12:16:10.150436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.920 [2024-12-05 12:16:10.150524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.920 [2024-12-05 12:16:10.150587] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.920 [2024-12-05 12:16:10.150647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.920 [2024-12-05 12:16:10.150675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.920 [2024-12-05 12:16:10.150736] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.920 [2024-12-05 12:16:10.150769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.920 [2024-12-05 12:16:10.150798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.920 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:39.920 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:39.920 [2024-12-05 12:16:10.648666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:39.920 [2024-12-05 12:16:10.649977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.920 [2024-12-05 12:16:10.650125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.920 [2024-12-05 12:16:10.650193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.920 [2024-12-05 12:16:10.650284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.920 [2024-12-05 12:16:10.650305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.920 [2024-12-05 12:16:10.650455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.921 [2024-12-05 12:16:10.650536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.921 [2024-12-05 12:16:10.650557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.921 [2024-12-05 12:16:10.650610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.921 [2024-12-05 12:16:10.650662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.921 [2024-12-05 12:16:10.650684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.921 [2024-12-05 12:16:10.650731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.921 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:39.921 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:39.921 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:39.921 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:39.921 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:39.921 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:39.921 12:16:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.921 12:16:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.921 12:16:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.921 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:39.921 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:40.180 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:40.180 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:40.180 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:40.180 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:40.180 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:40.180 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:40.180 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:40.180 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:40.180 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:40.180 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:40.180 12:16:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:52.380 12:16:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:52.380 12:16:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:52.380 12:16:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:52.380 12:16:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:52.380 12:16:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:52.380 12:16:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:52.380 12:16:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.380 12:16:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:52.380 12:16:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.380 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:52.380 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:52.380 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:52.380 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:52.380 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:52.380 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:52.380 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:52.380 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:52.380 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:52.380 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:52.380 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:52.380 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:52.380 12:16:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.380 12:16:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:52.380 [2024-12-05 12:16:23.048850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:52.380 [2024-12-05 12:16:23.050154] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.380 [2024-12-05 12:16:23.050194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.380 [2024-12-05 12:16:23.050206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.380 [2024-12-05 12:16:23.050228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.380 [2024-12-05 12:16:23.050235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.380 [2024-12-05 12:16:23.050244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.380 [2024-12-05 12:16:23.050252] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.380 [2024-12-05 12:16:23.050261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.380 [2024-12-05 12:16:23.050267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.380 [2024-12-05 12:16:23.050276] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.380 [2024-12-05 12:16:23.050283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.380 [2024-12-05 12:16:23.050291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.380 12:16:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.380 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:52.380 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:52.946 [2024-12-05 12:16:23.548870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:52.946 [2024-12-05 12:16:23.552418] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.946 [2024-12-05 12:16:23.552452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.946 [2024-12-05 12:16:23.552479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.946 [2024-12-05 12:16:23.552500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.946 [2024-12-05 12:16:23.552511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.946 [2024-12-05 12:16:23.552519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.946 [2024-12-05 12:16:23.552528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.946 [2024-12-05 12:16:23.552535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.946 [2024-12-05 12:16:23.552543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.946 [2024-12-05 12:16:23.552550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.946 [2024-12-05 12:16:23.552559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.946 [2024-12-05 12:16:23.552565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:52.946 12:16:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.946 12:16:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:52.946 12:16:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:52.946 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:53.204 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:53.204 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:53.205 12:16:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:05.407 12:16:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.407 12:16:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:05.407 12:16:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:05.407 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:05.408 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:05.408 12:16:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.408 12:16:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:05.408 12:16:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.408 [2024-12-05 12:16:35.949053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:05.408 [2024-12-05 12:16:35.950241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.408 [2024-12-05 12:16:35.950278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.408 [2024-12-05 12:16:35.950290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.408 [2024-12-05 12:16:35.950310] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.408 [2024-12-05 12:16:35.950318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.408 [2024-12-05 12:16:35.950328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.408 [2024-12-05 12:16:35.950336] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.408 [2024-12-05 12:16:35.950347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.408 [2024-12-05 12:16:35.950354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.408 [2024-12-05 12:16:35.950362] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.408 [2024-12-05 12:16:35.950370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.408 [2024-12-05 12:16:35.950378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.408 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:05.408 12:16:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:05.746 [2024-12-05 12:16:36.349048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:05.746 [2024-12-05 12:16:36.350561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.746 [2024-12-05 12:16:36.350590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.746 [2024-12-05 12:16:36.350602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.746 [2024-12-05 12:16:36.350620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.746 [2024-12-05 12:16:36.350630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.746 [2024-12-05 12:16:36.350637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.746 [2024-12-05 12:16:36.350646] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.746 [2024-12-05 12:16:36.350652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.746 [2024-12-05 12:16:36.350660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.746 [2024-12-05 12:16:36.350668] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.746 [2024-12-05 12:16:36.350679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.746 [2024-12-05 12:16:36.350685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.746 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:05.746 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:05.746 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:05.746 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:05.746 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:05.746 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:05.746 12:16:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.746 12:16:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:05.746 12:16:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.746 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:05.746 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:05.746 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:05.746 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:05.746 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:06.032 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:06.032 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:06.032 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:06.032 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:06.032 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:06.032 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:06.032 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:06.032 12:16:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:18.241 12:16:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:18.241 12:16:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:18.241 12:16:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:18.241 12:16:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:18.241 12:16:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:18.241 12:16:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.241 12:16:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:18.241 12:16:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.71 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.71 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:18.241 12:16:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.71 00:13:18.241 12:16:48 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.71 2 00:13:18.241 remove_attach_helper took 44.71s to complete (handling 2 nvme drive(s)) 12:16:48 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:18.241 12:16:48 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67508 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67508 ']' 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67508 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67508 00:13:18.241 killing process with pid 67508 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67508' 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67508 00:13:18.241 12:16:48 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67508 00:13:19.618 12:16:50 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:19.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:20.190 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:20.190 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:20.190 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:20.190 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:20.190 00:13:20.190 real 2m29.469s 00:13:20.190 user 1m51.959s 00:13:20.190 sys 0m16.203s 00:13:20.190 12:16:50 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.190 ************************************ 00:13:20.190 END TEST sw_hotplug 00:13:20.190 ************************************ 00:13:20.190 12:16:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 12:16:51 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:20.190 12:16:51 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:20.190 12:16:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:20.190 12:16:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.190 12:16:51 -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 ************************************ 00:13:20.190 START TEST nvme_xnvme 00:13:20.190 ************************************ 00:13:20.190 12:16:51 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:20.455 * Looking for test storage... 00:13:20.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.455 12:16:51 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:20.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.455 --rc genhtml_branch_coverage=1 00:13:20.455 --rc genhtml_function_coverage=1 00:13:20.455 --rc genhtml_legend=1 00:13:20.455 --rc geninfo_all_blocks=1 00:13:20.455 --rc geninfo_unexecuted_blocks=1 00:13:20.455 00:13:20.455 ' 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:20.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.455 --rc genhtml_branch_coverage=1 00:13:20.455 --rc genhtml_function_coverage=1 00:13:20.455 --rc genhtml_legend=1 00:13:20.455 --rc geninfo_all_blocks=1 00:13:20.455 --rc geninfo_unexecuted_blocks=1 00:13:20.455 00:13:20.455 ' 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:20.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.455 --rc genhtml_branch_coverage=1 00:13:20.455 --rc genhtml_function_coverage=1 00:13:20.455 --rc genhtml_legend=1 00:13:20.455 --rc geninfo_all_blocks=1 00:13:20.455 --rc geninfo_unexecuted_blocks=1 00:13:20.455 00:13:20.455 ' 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:20.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.455 --rc genhtml_branch_coverage=1 00:13:20.455 --rc genhtml_function_coverage=1 00:13:20.455 --rc genhtml_legend=1 00:13:20.455 --rc geninfo_all_blocks=1 00:13:20.455 --rc geninfo_unexecuted_blocks=1 00:13:20.455 00:13:20.455 ' 00:13:20.455 12:16:51 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:13:20.455 12:16:51 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:20.455 12:16:51 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:20.455 12:16:51 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:20.455 12:16:51 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:20.456 12:16:51 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:20.456 12:16:51 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:20.456 12:16:51 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:20.456 #define SPDK_CONFIG_H 00:13:20.456 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:20.456 #define SPDK_CONFIG_APPS 1 00:13:20.456 #define SPDK_CONFIG_ARCH native 00:13:20.456 #define SPDK_CONFIG_ASAN 1 00:13:20.456 #undef SPDK_CONFIG_AVAHI 00:13:20.456 #undef SPDK_CONFIG_CET 00:13:20.456 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:20.456 #define SPDK_CONFIG_COVERAGE 1 00:13:20.456 #define SPDK_CONFIG_CROSS_PREFIX 00:13:20.456 #undef SPDK_CONFIG_CRYPTO 00:13:20.456 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:20.456 #undef SPDK_CONFIG_CUSTOMOCF 00:13:20.456 #undef SPDK_CONFIG_DAOS 00:13:20.456 #define SPDK_CONFIG_DAOS_DIR 00:13:20.456 #define SPDK_CONFIG_DEBUG 1 00:13:20.456 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:20.456 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:20.456 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:20.456 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:20.456 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:20.456 #undef SPDK_CONFIG_DPDK_UADK 00:13:20.456 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:20.456 #define SPDK_CONFIG_EXAMPLES 1 00:13:20.456 #undef SPDK_CONFIG_FC 00:13:20.456 #define SPDK_CONFIG_FC_PATH 00:13:20.456 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:20.456 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:20.456 #define SPDK_CONFIG_FSDEV 1 00:13:20.456 #undef SPDK_CONFIG_FUSE 00:13:20.456 #undef SPDK_CONFIG_FUZZER 00:13:20.456 #define SPDK_CONFIG_FUZZER_LIB 00:13:20.456 #undef SPDK_CONFIG_GOLANG 00:13:20.456 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:20.456 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:20.456 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:20.456 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:20.456 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:20.456 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:20.456 #undef SPDK_CONFIG_HAVE_LZ4 00:13:20.456 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:20.456 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:20.456 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:20.456 #define SPDK_CONFIG_IDXD 1 00:13:20.456 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:20.456 #undef SPDK_CONFIG_IPSEC_MB 00:13:20.456 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:20.456 #define SPDK_CONFIG_ISAL 1 00:13:20.456 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:20.456 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:20.456 #define SPDK_CONFIG_LIBDIR 00:13:20.456 #undef SPDK_CONFIG_LTO 00:13:20.456 #define SPDK_CONFIG_MAX_LCORES 128 00:13:20.456 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:20.456 #define SPDK_CONFIG_NVME_CUSE 1 00:13:20.456 #undef SPDK_CONFIG_OCF 00:13:20.456 #define SPDK_CONFIG_OCF_PATH 00:13:20.456 #define SPDK_CONFIG_OPENSSL_PATH 00:13:20.456 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:20.456 #define SPDK_CONFIG_PGO_DIR 00:13:20.456 #undef SPDK_CONFIG_PGO_USE 00:13:20.456 #define SPDK_CONFIG_PREFIX /usr/local 00:13:20.456 #undef SPDK_CONFIG_RAID5F 00:13:20.456 #undef SPDK_CONFIG_RBD 00:13:20.456 #define SPDK_CONFIG_RDMA 1 00:13:20.456 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:20.456 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:20.456 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:20.456 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:20.456 #define SPDK_CONFIG_SHARED 1 00:13:20.456 #undef SPDK_CONFIG_SMA 00:13:20.456 #define SPDK_CONFIG_TESTS 1 00:13:20.456 #undef SPDK_CONFIG_TSAN 00:13:20.456 #define SPDK_CONFIG_UBLK 1 00:13:20.456 #define SPDK_CONFIG_UBSAN 1 00:13:20.456 #undef SPDK_CONFIG_UNIT_TESTS 00:13:20.456 #undef SPDK_CONFIG_URING 00:13:20.456 #define SPDK_CONFIG_URING_PATH 00:13:20.456 #undef SPDK_CONFIG_URING_ZNS 00:13:20.456 #undef SPDK_CONFIG_USDT 00:13:20.456 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:20.456 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:20.456 #undef SPDK_CONFIG_VFIO_USER 00:13:20.456 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:20.456 #define SPDK_CONFIG_VHOST 1 00:13:20.456 #define SPDK_CONFIG_VIRTIO 1 00:13:20.456 #undef SPDK_CONFIG_VTUNE 00:13:20.456 #define SPDK_CONFIG_VTUNE_DIR 00:13:20.456 #define SPDK_CONFIG_WERROR 1 00:13:20.456 #define SPDK_CONFIG_WPDK_DIR 00:13:20.456 #define SPDK_CONFIG_XNVME 1 00:13:20.456 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:20.456 12:16:51 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:20.456 12:16:51 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:20.456 12:16:51 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.456 12:16:51 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.456 12:16:51 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.456 12:16:51 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.457 12:16:51 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.457 12:16:51 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.457 12:16:51 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.457 12:16:51 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:20.457 12:16:51 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@68 -- # uname -s 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:20.457 
12:16:51 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:20.457 12:16:51 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:20.457 12:16:51 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:20.458 12:16:51 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:20.458 12:16:51 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
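
The exports traced above assemble the sanitizer environment for this ASan/UBSan-enabled build: ASan aborts on the first error, UBSan halts with exit code 134, and LeakSanitizer reads a suppression file that whitelists a known libfuse3 leak. A minimal sketch of the equivalent setup, reusing the exact option strings from the trace (the function name and the redirection into the suppression file are assumptions; the trace shows only the echo and the exports):

    setup_sanitizers() {
        # Abort on the first ASan error; coredumps stay enabled.
        export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
        # Make UBSan fatal as well, with a stack trace and exit code 134.
        export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
        # Rebuild the LeakSanitizer suppression file from scratch.
        local supp=/var/tmp/asan_suppression_file
        rm -rf "$supp"
        echo leak:libfuse3.so >"$supp"   # assumed redirection; suppresses a known libfuse3 leak
        export LSAN_OPTIONS=suppressions=$supp
    }
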
00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68860 ]] 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68860 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.AIKkeY 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.AIKkeY/tests/xnvme /tmp/spdk.AIKkeY 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:20.458 12:16:51 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974532096 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593288704 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:13:20.458 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974532096 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593288704 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.459 12:16:51 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97250742272 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2452037632 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:20.459 * Looking for test storage... 
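
The read/assignment pairs traced above are a single loop folding df -T output into bash associative arrays keyed by mount point; set_test_storage then walks the candidate directories and keeps the first mount with enough free space. A sketch of that loop under the same variable names (the process substitution stands in for the separate df and grep steps shown in the trace):

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source    # e.g. /dev/vda5
        fss["$mount"]=$fs           # e.g. btrfs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)

    # Keep a candidate once its mount has room for the test payload.
    requested_size=2214592512      # 2 GiB plus margin, as in the trace
    target_space=${avails[/home]}  # 13974532096 in this run
    (( target_space >= requested_size )) && echo "using /home for test storage"
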
00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974532096 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:20.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:20.459 12:16:51 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:20.722 12:16:51 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:20.722 12:16:51 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.722 12:16:51 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:20.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.722 --rc genhtml_branch_coverage=1 00:13:20.722 --rc genhtml_function_coverage=1 00:13:20.722 --rc genhtml_legend=1 00:13:20.722 --rc geninfo_all_blocks=1 00:13:20.722 --rc geninfo_unexecuted_blocks=1 00:13:20.722 00:13:20.722 ' 00:13:20.722 12:16:51 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:20.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.722 --rc genhtml_branch_coverage=1 00:13:20.722 --rc genhtml_function_coverage=1 00:13:20.722 --rc genhtml_legend=1 00:13:20.722 --rc geninfo_all_blocks=1 
00:13:20.722 --rc geninfo_unexecuted_blocks=1 00:13:20.722 00:13:20.722 ' 00:13:20.722 12:16:51 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:20.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.722 --rc genhtml_branch_coverage=1 00:13:20.722 --rc genhtml_function_coverage=1 00:13:20.722 --rc genhtml_legend=1 00:13:20.722 --rc geninfo_all_blocks=1 00:13:20.722 --rc geninfo_unexecuted_blocks=1 00:13:20.722 00:13:20.722 ' 00:13:20.722 12:16:51 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:20.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.722 --rc genhtml_branch_coverage=1 00:13:20.722 --rc genhtml_function_coverage=1 00:13:20.722 --rc genhtml_legend=1 00:13:20.722 --rc geninfo_all_blocks=1 00:13:20.722 --rc geninfo_unexecuted_blocks=1 00:13:20.722 00:13:20.722 ' 00:13:20.722 12:16:51 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.722 12:16:51 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.722 12:16:51 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.722 12:16:51 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.722 12:16:51 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.722 12:16:51 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:20.722 12:16:51 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.722 12:16:51 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:13:20.722 12:16:51 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:20.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:21.245 Waiting for block devices as requested 00:13:21.245 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:21.245 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:21.245 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:21.508 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:26.797 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:26.797 12:16:57 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:13:26.797 12:16:57 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:13:26.797 12:16:57 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:13:27.059 12:16:57 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:13:27.059 12:16:57 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:27.059 No valid GPT data, bailing 00:13:27.059 12:16:57 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:27.059 12:16:57 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:13:27.059 12:16:57 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:27.059 12:16:57 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:27.059 12:16:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:27.059 12:16:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.059 12:16:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.059 ************************************ 00:13:27.059 START TEST xnvme_rpc 00:13:27.059 ************************************ 00:13:27.059 12:16:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:27.059 12:16:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:27.059 12:16:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:27.059 12:16:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:27.059 12:16:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:27.059 12:16:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69248 00:13:27.059 12:16:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69248 00:13:27.059 12:16:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:27.059 12:16:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69248 ']' 00:13:27.059 12:16:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.059 12:16:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.059 12:16:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.059 12:16:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.059 12:16:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.059 [2024-12-05 12:16:57.899230] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
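
The xnvme_rpc test starting here drives the bdev_xnvme RPCs end to end: create a bdev over the raw NVMe node, read the config back field by field, delete it, and kill the target. Reduced to SPDK's stock rpc.py client, the sequence looks roughly like this (a sketch: rpc_cmd in the trace forwards the same positional arguments over /var/tmp/spdk.sock, the jq filter is copied from the trace, and the real test waits for the RPC socket via waitforlisten before issuing calls):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &   # pid 69248 in this run
    # Create an xnvme bdev over the raw NVMe node with the libaio mechanism.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
    # Read the config back and check name, filename, io_mechanism, conserve_cpu.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill $!   # the trace does this via killprocess "$spdk_tgt"
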
00:13:27.059 [2024-12-05 12:16:57.899492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69248 ] 00:13:27.319 [2024-12-05 12:16:58.054905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.319 [2024-12-05 12:16:58.162718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.262 xnvme_bdev 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:28.262 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69248 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69248 ']' 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69248 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.263 12:16:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69248 00:13:28.263 killing process with pid 69248 00:13:28.263 12:16:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:28.263 12:16:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:28.263 12:16:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69248' 00:13:28.263 12:16:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69248 00:13:28.263 12:16:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69248 00:13:30.182 ************************************ 00:13:30.182 END TEST xnvme_rpc 00:13:30.182 ************************************ 00:13:30.182 00:13:30.182 real 0m2.793s 00:13:30.182 user 0m2.821s 00:13:30.182 sys 0m0.420s 00:13:30.182 12:17:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.182 12:17:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.182 12:17:00 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:30.182 12:17:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:30.182 12:17:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.182 12:17:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:30.182 ************************************ 00:13:30.182 START TEST xnvme_bdevperf 00:13:30.182 ************************************ 00:13:30.182 12:17:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:30.182 12:17:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:30.182 12:17:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:30.182 12:17:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:30.182 12:17:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:30.182 12:17:00 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:30.182 12:17:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:30.182 12:17:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:30.182 { 00:13:30.182 "subsystems": [ 00:13:30.182 { 00:13:30.182 "subsystem": "bdev", 00:13:30.182 "config": [ 00:13:30.182 { 00:13:30.182 "params": { 00:13:30.182 "io_mechanism": "libaio", 00:13:30.182 "conserve_cpu": false, 00:13:30.182 "filename": "/dev/nvme0n1", 00:13:30.182 "name": "xnvme_bdev" 00:13:30.182 }, 00:13:30.182 "method": "bdev_xnvme_create" 00:13:30.182 }, 00:13:30.182 { 00:13:30.182 "method": "bdev_wait_for_examine" 00:13:30.182 } 00:13:30.182 ] 00:13:30.182 } 00:13:30.182 ] 00:13:30.182 } 00:13:30.182 [2024-12-05 12:17:00.763661] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:13:30.182 [2024-12-05 12:17:00.764361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69322 ] 00:13:30.182 [2024-12-05 12:17:00.936003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.182 [2024-12-05 12:17:01.043976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.504 Running I/O for 5 seconds... 00:13:32.833 32672.00 IOPS, 127.62 MiB/s [2024-12-05T12:17:04.644Z] 30283.50 IOPS, 118.29 MiB/s [2024-12-05T12:17:05.583Z] 28390.67 IOPS, 110.90 MiB/s [2024-12-05T12:17:06.521Z] 27396.00 IOPS, 107.02 MiB/s [2024-12-05T12:17:06.521Z] 27302.20 IOPS, 106.65 MiB/s 00:13:35.652 Latency(us) 00:13:35.652 [2024-12-05T12:17:06.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.652 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:35.652 xnvme_bdev : 5.01 27260.23 106.49 0.00 0.00 2342.81 463.16 8469.27 00:13:35.652 [2024-12-05T12:17:06.521Z] =================================================================================================================== 00:13:35.652 [2024-12-05T12:17:06.521Z] Total : 27260.23 106.49 0.00 0.00 2342.81 463.16 8469.27 00:13:36.592 12:17:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:36.592 12:17:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:36.592 12:17:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:36.592 12:17:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:36.592 12:17:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:36.592 { 00:13:36.592 "subsystems": [ 00:13:36.592 { 00:13:36.592 "subsystem": "bdev", 00:13:36.592 "config": [ 00:13:36.592 { 00:13:36.592 "params": { 00:13:36.592 "io_mechanism": "libaio", 00:13:36.592 "conserve_cpu": false, 00:13:36.592 "filename": "/dev/nvme0n1", 00:13:36.592 "name": "xnvme_bdev" 00:13:36.592 }, 00:13:36.592 "method": "bdev_xnvme_create" 00:13:36.592 }, 00:13:36.592 { 00:13:36.592 "method": "bdev_wait_for_examine" 00:13:36.592 } 00:13:36.592 ] 00:13:36.592 } 00:13:36.592 ] 00:13:36.592 } 00:13:36.592 [2024-12-05 12:17:07.322916] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
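
Both bdevperf passes in this test share one invocation shape; only the -w workload differs between randread and randwrite. With the gen_conf JSON shown above saved to a file (say /tmp/xnvme.json, an assumed path; the harness feeds it through the /dev/fd/62 substitution instead), either pass reduces to:

    # -q 64: queue depth, -o 4096: 4 KiB I/O size, -t 5: five-second run,
    # -T xnvme_bdev: confine the benchmark to the xnvme bdev just defined.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme.json \
        -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
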
00:13:36.592 [2024-12-05 12:17:07.323082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69397 ] 00:13:36.853 [2024-12-05 12:17:07.491354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.853 [2024-12-05 12:17:07.642723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.425 Running I/O for 5 seconds... 00:13:39.307 32538.00 IOPS, 127.10 MiB/s [2024-12-05T12:17:11.119Z] 32406.00 IOPS, 126.59 MiB/s [2024-12-05T12:17:12.064Z] 32207.67 IOPS, 125.81 MiB/s [2024-12-05T12:17:13.006Z] 31819.25 IOPS, 124.29 MiB/s 00:13:42.137 Latency(us) 00:13:42.137 [2024-12-05T12:17:13.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.137 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:42.137 xnvme_bdev : 5.00 32080.41 125.31 0.00 0.00 1990.45 222.13 10788.23 00:13:42.137 [2024-12-05T12:17:13.006Z] =================================================================================================================== 00:13:42.137 [2024-12-05T12:17:13.006Z] Total : 32080.41 125.31 0.00 0.00 1990.45 222.13 10788.23 00:13:43.078 00:13:43.078 real 0m13.203s 00:13:43.078 user 0m4.935s 00:13:43.078 sys 0m6.622s 00:13:43.078 ************************************ 00:13:43.078 END TEST xnvme_bdevperf 00:13:43.078 ************************************ 00:13:43.078 12:17:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.078 12:17:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:43.338 12:17:13 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:43.338 12:17:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:43.338 12:17:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.338 12:17:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.338 ************************************ 00:13:43.338 START TEST xnvme_fio_plugin 00:13:43.338 ************************************ 00:13:43.338 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:43.338 12:17:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:43.339 12:17:13 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:43.339 12:17:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.339 { 00:13:43.339 "subsystems": [ 00:13:43.339 { 00:13:43.339 "subsystem": "bdev", 00:13:43.339 "config": [ 00:13:43.339 { 00:13:43.339 "params": { 00:13:43.339 "io_mechanism": "libaio", 00:13:43.339 "conserve_cpu": false, 00:13:43.339 "filename": "/dev/nvme0n1", 00:13:43.339 "name": "xnvme_bdev" 00:13:43.339 }, 00:13:43.339 "method": "bdev_xnvme_create" 00:13:43.339 }, 00:13:43.339 { 00:13:43.339 "method": "bdev_wait_for_examine" 00:13:43.339 } 00:13:43.339 ] 00:13:43.339 } 00:13:43.339 ] 00:13:43.339 } 00:13:43.339 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:43.339 fio-3.35 00:13:43.339 Starting 1 thread 00:13:49.943 00:13:49.943 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69523: Thu Dec 5 12:17:19 2024 00:13:49.943 read: IOPS=30.6k, BW=120MiB/s (125MB/s)(599MiB/5002msec) 00:13:49.943 slat (usec): min=4, max=1807, avg=25.79, stdev=107.29 00:13:49.943 clat (usec): min=106, max=4593, avg=1407.14, stdev=526.27 00:13:49.943 lat (usec): min=181, max=4775, avg=1432.94, stdev=513.95 00:13:49.943 clat percentiles (usec): 00:13:49.943 | 1.00th=[ 289], 5.00th=[ 603], 10.00th=[ 750], 20.00th=[ 979], 00:13:49.943 | 30.00th=[ 1139], 40.00th=[ 1254], 50.00th=[ 1385], 60.00th=[ 1516], 00:13:49.943 | 70.00th=[ 1647], 80.00th=[ 1795], 90.00th=[ 2057], 95.00th=[ 2311], 00:13:49.943 | 99.00th=[ 2900], 99.50th=[ 3163], 99.90th=[ 3720], 99.95th=[ 3916], 00:13:49.943 | 99.99th=[ 4424] 00:13:49.943 bw ( KiB/s): min=118496, max=132624, per=100.00%, avg=122728.00, stdev=4393.84, 
samples=9 00:13:49.943 iops : min=29624, max=33156, avg=30682.00, stdev=1098.46, samples=9 00:13:49.943 lat (usec) : 250=0.62%, 500=2.29%, 750=7.09%, 1000=11.33% 00:13:49.943 lat (msec) : 2=67.26%, 4=11.37%, 10=0.04% 00:13:49.943 cpu : usr=33.85%, sys=57.43%, ctx=9, majf=0, minf=764 00:13:49.943 IO depths : 1=0.4%, 2=0.9%, 4=2.5%, 8=7.7%, 16=23.6%, 32=62.9%, >=64=2.1% 00:13:49.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.943 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:13:49.943 issued rwts: total=153227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.943 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:49.943 00:13:49.943 Run status group 0 (all jobs): 00:13:49.943 READ: bw=120MiB/s (125MB/s), 120MiB/s-120MiB/s (125MB/s-125MB/s), io=599MiB (628MB), run=5002-5002msec 00:13:50.205 ----------------------------------------------------- 00:13:50.205 Suppressions used: 00:13:50.205 count bytes template 00:13:50.205 1 11 /usr/src/fio/parse.c 00:13:50.205 1 8 libtcmalloc_minimal.so 00:13:50.205 1 904 libcrypto.so 00:13:50.205 ----------------------------------------------------- 00:13:50.205 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:50.467 12:17:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:50.468 { 00:13:50.468 "subsystems": [ 00:13:50.468 { 00:13:50.468 "subsystem": "bdev", 00:13:50.468 "config": [ 00:13:50.468 { 00:13:50.468 "params": { 00:13:50.468 "io_mechanism": "libaio", 00:13:50.468 "conserve_cpu": false, 00:13:50.468 "filename": "/dev/nvme0n1", 00:13:50.468 "name": "xnvme_bdev" 00:13:50.468 }, 00:13:50.468 "method": "bdev_xnvme_create" 00:13:50.468 }, 00:13:50.468 { 00:13:50.468 "method": "bdev_wait_for_examine" 00:13:50.468 } 00:13:50.468 ] 00:13:50.468 } 00:13:50.468 ] 00:13:50.468 } 00:13:50.468 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:50.468 fio-3.35 00:13:50.468 Starting 1 thread 00:13:57.053 00:13:57.053 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69615: Thu Dec 5 12:17:27 2024 00:13:57.053 write: IOPS=35.6k, BW=139MiB/s (146MB/s)(695MiB/5001msec); 0 zone resets 00:13:57.053 slat (usec): min=4, max=1733, avg=22.79, stdev=76.62 00:13:57.053 clat (usec): min=106, max=5320, avg=1172.57, stdev=553.87 00:13:57.053 lat (usec): min=185, max=5348, avg=1195.36, stdev=549.17 00:13:57.053 clat percentiles (usec): 00:13:57.053 | 1.00th=[ 253], 5.00th=[ 404], 10.00th=[ 529], 20.00th=[ 709], 00:13:57.053 | 30.00th=[ 857], 40.00th=[ 979], 50.00th=[ 1106], 60.00th=[ 1237], 00:13:57.053 | 70.00th=[ 1385], 80.00th=[ 1582], 90.00th=[ 1876], 95.00th=[ 2147], 00:13:57.053 | 99.00th=[ 2933], 99.50th=[ 3228], 99.90th=[ 3916], 99.95th=[ 4146], 00:13:57.053 | 99.99th=[ 4621] 00:13:57.053 bw ( KiB/s): min=130067, max=153512, per=100.00%, avg=143354.78, stdev=7385.46, samples=9 00:13:57.053 iops : min=32516, max=38378, avg=35838.56, stdev=1846.56, samples=9 00:13:57.053 lat (usec) : 250=0.95%, 500=7.75%, 750=14.18%, 1000=18.61% 00:13:57.053 lat (msec) : 2=51.29%, 4=7.14%, 10=0.08% 00:13:57.053 cpu : usr=31.10%, sys=55.26%, ctx=74, majf=0, minf=765 00:13:57.053 IO depths : 1=0.2%, 2=0.8%, 4=2.5%, 8=8.0%, 16=24.4%, 32=62.1%, >=64=2.0% 00:13:57.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.053 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:57.053 issued rwts: total=0,177889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.053 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:57.053 00:13:57.053 Run status group 0 (all jobs): 00:13:57.053 WRITE: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=695MiB (729MB), run=5001-5001msec 00:13:57.315 ----------------------------------------------------- 00:13:57.315 Suppressions used: 00:13:57.315 count bytes template 00:13:57.315 1 11 /usr/src/fio/parse.c 00:13:57.315 1 8 libtcmalloc_minimal.so 00:13:57.315 1 904 libcrypto.so 00:13:57.315 ----------------------------------------------------- 00:13:57.315 00:13:57.577 ************************************ 00:13:57.577 END TEST xnvme_fio_plugin 00:13:57.577 ************************************ 00:13:57.577 
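For reference, each fio pass above reduces to a single invocation of stock fio with the SPDK bdev ioengine preloaded. A minimal sketch reconstructed from this trace (the libasan, fio, and plugin paths are specific to this CI host; the heredoc is the JSON config block printed in the log, fed on a substituted descriptor just as /dev/fd/62 is here; the asan preload only applies because the plugin was built with ASan, which the ldd|grep libasan step above detects):

  # preload asan plus the fio plugin, then run a 5s 4k randread at QD64
  LD_PRELOAD="/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=<(cat <<'JSON'
  { "subsystems": [ { "subsystem": "bdev", "config": [
    { "params": { "io_mechanism": "libaio", "conserve_cpu": false,
                  "filename": "/dev/nvme0n1", "name": "xnvme_bdev" },
      "method": "bdev_xnvme_create" },
    { "method": "bdev_wait_for_examine" } ] } ] }
  JSON
  ) --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev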
00:13:57.577 real 0m14.244s 00:13:57.577 user 0m6.292s 00:13:57.577 sys 0m6.430s 00:13:57.577 12:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.577 12:17:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:57.577 12:17:28 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:57.577 12:17:28 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:57.577 12:17:28 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:57.577 12:17:28 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:57.577 12:17:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:57.577 12:17:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.577 12:17:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.577 ************************************ 00:13:57.577 START TEST xnvme_rpc 00:13:57.577 ************************************ 00:13:57.577 12:17:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:57.577 12:17:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:57.577 12:17:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:57.577 12:17:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:57.577 12:17:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:57.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.577 12:17:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69701 00:13:57.577 12:17:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69701 00:13:57.577 12:17:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69701 ']' 00:13:57.577 12:17:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.577 12:17:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.577 12:17:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.577 12:17:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.577 12:17:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.577 12:17:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:57.577 [2024-12-05 12:17:28.380375] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
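The xnvme_rpc test starting here boils down to four calls against the freshly started target. A sketch using the repo's rpc.py (which the harness's rpc_cmd wraps; repo path and default /var/tmp/spdk.sock socket as in this log, with -c being the conserve_cpu flag exercised in this true pass):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # create the xnvme bdev on the raw namespace with conserve_cpu enabled
  $RPC bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
  # read the live config back and pick out one param, as rpc_xnvme does below
  $RPC framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
  # tear the bdev down again before the target is killed
  $RPC bdev_xnvme_delete xnvme_bdev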
00:13:57.577 [2024-12-05 12:17:28.380557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69701 ] 00:13:57.838 [2024-12-05 12:17:28.546272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.838 [2024-12-05 12:17:28.697909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.783 xnvme_bdev 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:58.783 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69701 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69701 ']' 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69701 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69701 00:13:59.046 killing process with pid 69701 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69701' 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69701 00:13:59.046 12:17:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69701 00:14:00.961 00:14:00.961 real 0m3.262s 00:14:00.961 user 0m3.134s 00:14:00.961 sys 0m0.607s 00:14:00.961 ************************************ 00:14:00.961 END TEST xnvme_rpc 00:14:00.961 ************************************ 00:14:00.961 12:17:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.961 12:17:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.961 12:17:31 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:00.961 12:17:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:00.961 12:17:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.961 12:17:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.961 ************************************ 00:14:00.961 START TEST xnvme_bdevperf 00:14:00.961 ************************************ 00:14:00.961 12:17:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:00.961 12:17:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:00.961 12:17:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:14:00.961 12:17:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:00.961 12:17:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:00.961 12:17:31 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:00.961 12:17:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:00.961 12:17:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:00.961 { 00:14:00.961 "subsystems": [ 00:14:00.961 { 00:14:00.961 "subsystem": "bdev", 00:14:00.961 "config": [ 00:14:00.961 { 00:14:00.961 "params": { 00:14:00.961 "io_mechanism": "libaio", 00:14:00.961 "conserve_cpu": true, 00:14:00.961 "filename": "/dev/nvme0n1", 00:14:00.961 "name": "xnvme_bdev" 00:14:00.961 }, 00:14:00.961 "method": "bdev_xnvme_create" 00:14:00.961 }, 00:14:00.961 { 00:14:00.961 "method": "bdev_wait_for_examine" 00:14:00.961 } 00:14:00.961 ] 00:14:00.961 } 00:14:00.961 ] 00:14:00.961 } 00:14:00.961 [2024-12-05 12:17:31.698688] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:14:00.961 [2024-12-05 12:17:31.698859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69775 ] 00:14:01.222 [2024-12-05 12:17:31.861281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.222 [2024-12-05 12:17:32.019679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.835 Running I/O for 5 seconds... 00:14:03.716 29693.00 IOPS, 115.99 MiB/s [2024-12-05T12:17:35.525Z] 30325.00 IOPS, 118.46 MiB/s [2024-12-05T12:17:36.466Z] 30327.33 IOPS, 118.47 MiB/s [2024-12-05T12:17:37.398Z] 30322.25 IOPS, 118.45 MiB/s 00:14:06.529 Latency(us) 00:14:06.529 [2024-12-05T12:17:37.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.530 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:06.530 xnvme_bdev : 5.00 32247.85 125.97 0.00 0.00 1979.84 237.88 10435.35 00:14:06.530 [2024-12-05T12:17:37.399Z] =================================================================================================================== 00:14:06.530 [2024-12-05T12:17:37.399Z] Total : 32247.85 125.97 0.00 0.00 1979.84 237.88 10435.35 00:14:07.464 12:17:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.464 12:17:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:07.464 12:17:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:07.464 12:17:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:07.464 12:17:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.464 { 00:14:07.464 "subsystems": [ 00:14:07.464 { 00:14:07.464 "subsystem": "bdev", 00:14:07.464 "config": [ 00:14:07.464 { 00:14:07.464 "params": { 00:14:07.464 "io_mechanism": "libaio", 00:14:07.464 "conserve_cpu": true, 00:14:07.464 "filename": "/dev/nvme0n1", 00:14:07.464 "name": "xnvme_bdev" 00:14:07.464 }, 00:14:07.464 "method": "bdev_xnvme_create" 00:14:07.464 }, 00:14:07.464 { 00:14:07.464 "method": "bdev_wait_for_examine" 00:14:07.464 } 00:14:07.464 ] 00:14:07.464 } 00:14:07.464 ] 00:14:07.464 } 00:14:07.464 [2024-12-05 12:17:38.210450] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
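Each bdevperf pass here is a standalone run: bdevperf loads the JSON bdev config directly rather than querying a running target. A sketch of the randwrite invocation being set up at this point, with xnvme.json standing in (hypothetically) for the config block printed in the log:

  # -q queue depth, -o IO size in bytes, -w workload, -t seconds,
  # -T restricts the run to the named bdev
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json xnvme.json -q 64 -o 4096 -w randwrite -t 5 -T xnvme_bdev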
00:14:07.464 [2024-12-05 12:17:38.210726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69856 ] 00:14:07.725 [2024-12-05 12:17:38.373674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.725 [2024-12-05 12:17:38.523905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.299 Running I/O for 5 seconds... 00:14:10.183 32225.00 IOPS, 125.88 MiB/s [2024-12-05T12:17:41.990Z] 31719.50 IOPS, 123.90 MiB/s [2024-12-05T12:17:42.934Z] 34761.00 IOPS, 135.79 MiB/s [2024-12-05T12:17:44.318Z] 34768.00 IOPS, 135.81 MiB/s 00:14:13.449 Latency(us) 00:14:13.449 [2024-12-05T12:17:44.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.449 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:13.449 xnvme_bdev : 5.00 33999.91 132.81 0.00 0.00 1877.77 195.35 7410.61 00:14:13.449 [2024-12-05T12:17:44.318Z] =================================================================================================================== 00:14:13.449 [2024-12-05T12:17:44.318Z] Total : 33999.91 132.81 0.00 0.00 1877.77 195.35 7410.61 00:14:13.709 00:14:13.709 real 0m12.899s 00:14:13.709 user 0m4.889s 00:14:13.709 sys 0m6.422s 00:14:13.709 ************************************ 00:14:13.709 END TEST xnvme_bdevperf 00:14:13.709 ************************************ 00:14:13.709 12:17:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.709 12:17:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:13.709 12:17:44 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:13.709 12:17:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:13.709 12:17:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.709 12:17:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:13.972 ************************************ 00:14:13.972 START TEST xnvme_fio_plugin 00:14:13.972 ************************************ 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:13.972 12:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:13.972 { 00:14:13.972 "subsystems": [ 00:14:13.972 { 00:14:13.972 "subsystem": "bdev", 00:14:13.972 "config": [ 00:14:13.972 { 00:14:13.972 "params": { 00:14:13.972 "io_mechanism": "libaio", 00:14:13.972 "conserve_cpu": true, 00:14:13.972 "filename": "/dev/nvme0n1", 00:14:13.972 "name": "xnvme_bdev" 00:14:13.972 }, 00:14:13.972 "method": "bdev_xnvme_create" 00:14:13.972 }, 00:14:13.972 { 00:14:13.972 "method": "bdev_wait_for_examine" 00:14:13.972 } 00:14:13.972 ] 00:14:13.972 } 00:14:13.972 ] 00:14:13.972 } 00:14:13.972 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:13.972 fio-3.35 00:14:13.972 Starting 1 thread 00:14:20.555 00:14:20.555 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69964: Thu Dec 5 12:17:50 2024 00:14:20.555 read: IOPS=39.6k, BW=155MiB/s (162MB/s)(774MiB/5001msec) 00:14:20.555 slat (usec): min=4, max=2024, avg=19.21, stdev=69.40 00:14:20.555 clat (usec): min=31, max=4846, avg=1073.42, stdev=533.73 00:14:20.555 lat (usec): min=165, max=4939, avg=1092.63, stdev=529.63 00:14:20.555 clat percentiles (usec): 00:14:20.555 | 1.00th=[ 210], 5.00th=[ 322], 10.00th=[ 433], 20.00th=[ 603], 00:14:20.555 | 30.00th=[ 742], 40.00th=[ 881], 50.00th=[ 1020], 60.00th=[ 1156], 00:14:20.555 | 70.00th=[ 1303], 80.00th=[ 1500], 90.00th=[ 1745], 95.00th=[ 1991], 00:14:20.555 | 99.00th=[ 2671], 99.50th=[ 3032], 99.90th=[ 3621], 99.95th=[ 3851], 00:14:20.555 | 99.99th=[ 4293] 00:14:20.555 bw ( KiB/s): min=126928, max=189592, per=98.38%, avg=155933.44, 
stdev=23339.39, samples=9 00:14:20.555 iops : min=31732, max=47398, avg=38983.33, stdev=5834.85, samples=9 00:14:20.555 lat (usec) : 50=0.01%, 250=2.10%, 500=11.43%, 750=16.93%, 1000=17.98% 00:14:20.555 lat (msec) : 2=46.67%, 4=4.86%, 10=0.03% 00:14:20.555 cpu : usr=37.24%, sys=53.28%, ctx=13, majf=0, minf=764 00:14:20.555 IO depths : 1=0.3%, 2=1.0%, 4=3.1%, 8=8.7%, 16=24.2%, 32=60.7%, >=64=2.0% 00:14:20.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.555 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:14:20.555 issued rwts: total=198156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:20.555 00:14:20.555 Run status group 0 (all jobs): 00:14:20.555 READ: bw=155MiB/s (162MB/s), 155MiB/s-155MiB/s (162MB/s-162MB/s), io=774MiB (812MB), run=5001-5001msec 00:14:20.814 ----------------------------------------------------- 00:14:20.814 Suppressions used: 00:14:20.814 count bytes template 00:14:20.814 1 11 /usr/src/fio/parse.c 00:14:20.814 1 8 libtcmalloc_minimal.so 00:14:20.814 1 904 libcrypto.so 00:14:20.814 ----------------------------------------------------- 00:14:20.814 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:20.814 12:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:20.814 { 00:14:20.814 "subsystems": [ 00:14:20.814 { 00:14:20.814 "subsystem": "bdev", 00:14:20.814 "config": [ 00:14:20.814 { 00:14:20.814 "params": { 00:14:20.814 "io_mechanism": "libaio", 00:14:20.814 "conserve_cpu": true, 00:14:20.814 "filename": "/dev/nvme0n1", 00:14:20.814 "name": "xnvme_bdev" 00:14:20.814 }, 00:14:20.814 "method": "bdev_xnvme_create" 00:14:20.814 }, 00:14:20.814 { 00:14:20.814 "method": "bdev_wait_for_examine" 00:14:20.814 } 00:14:20.814 ] 00:14:20.814 } 00:14:20.814 ] 00:14:20.814 } 00:14:21.075 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:21.075 fio-3.35 00:14:21.075 Starting 1 thread 00:14:27.652 00:14:27.652 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70061: Thu Dec 5 12:17:57 2024 00:14:27.652 write: IOPS=41.1k, BW=161MiB/s (169MB/s)(804MiB/5001msec); 0 zone resets 00:14:27.652 slat (usec): min=3, max=1793, avg=20.10, stdev=53.81 00:14:27.652 clat (usec): min=96, max=6187, avg=971.05, stdev=527.64 00:14:27.652 lat (usec): min=146, max=6221, avg=991.15, stdev=526.92 00:14:27.652 clat percentiles (usec): 00:14:27.652 | 1.00th=[ 194], 5.00th=[ 289], 10.00th=[ 383], 20.00th=[ 537], 00:14:27.652 | 30.00th=[ 668], 40.00th=[ 791], 50.00th=[ 914], 60.00th=[ 1029], 00:14:27.652 | 70.00th=[ 1156], 80.00th=[ 1303], 90.00th=[ 1565], 95.00th=[ 1909], 00:14:27.652 | 99.00th=[ 2835], 99.50th=[ 3195], 99.90th=[ 3949], 99.95th=[ 4178], 00:14:27.652 | 99.99th=[ 4883] 00:14:27.652 bw ( KiB/s): min=130640, max=198624, per=99.55%, avg=163840.89, stdev=23035.87, samples=9 00:14:27.652 iops : min=32660, max=49656, avg=40960.22, stdev=5758.97, samples=9 00:14:27.652 lat (usec) : 100=0.01%, 250=2.93%, 500=14.63%, 750=19.40%, 1000=20.75% 00:14:27.652 lat (msec) : 2=38.14%, 4=4.07%, 10=0.08% 00:14:27.652 cpu : usr=32.28%, sys=55.40%, ctx=20, majf=0, minf=765 00:14:27.652 IO depths : 1=0.2%, 2=0.9%, 4=3.2%, 8=9.2%, 16=25.5%, 32=59.1%, >=64=1.9% 00:14:27.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.652 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:14:27.652 issued rwts: total=0,205761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.652 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.652 00:14:27.652 Run status group 0 (all jobs): 00:14:27.652 WRITE: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=804MiB (843MB), run=5001-5001msec 00:14:27.652 ----------------------------------------------------- 00:14:27.652 Suppressions used: 00:14:27.652 count bytes template 00:14:27.652 1 11 /usr/src/fio/parse.c 00:14:27.652 1 8 libtcmalloc_minimal.so 00:14:27.652 1 904 libcrypto.so 00:14:27.652 ----------------------------------------------------- 00:14:27.652 00:14:27.652 ************************************ 00:14:27.652 END TEST xnvme_fio_plugin 00:14:27.652 00:14:27.652 real 0m13.876s 00:14:27.652 
user 0m6.353s 00:14:27.652 sys 0m6.042s 00:14:27.652 12:17:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.652 12:17:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:27.652 ************************************ 00:14:27.652 12:17:58 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:27.652 12:17:58 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:27.652 12:17:58 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:27.652 12:17:58 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:27.652 12:17:58 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:27.652 12:17:58 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:27.652 12:17:58 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:27.652 12:17:58 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:27.652 12:17:58 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:27.652 12:17:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:27.652 12:17:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.652 12:17:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.912 ************************************ 00:14:27.912 START TEST xnvme_rpc 00:14:27.912 ************************************ 00:14:27.912 12:17:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:27.912 12:17:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:27.912 12:17:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:27.912 12:17:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:27.912 12:17:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:27.912 12:17:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70148 00:14:27.912 12:17:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70148 00:14:27.912 12:17:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70148 ']' 00:14:27.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.912 12:17:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.912 12:17:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.912 12:17:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.912 12:17:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.912 12:17:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.912 12:17:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:27.912 [2024-12-05 12:17:58.634237] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
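For orientation, this is the shape of the loop driving the whole section, paraphrased from the xnvme.sh line numbers in the traces (only libaio and io_uring appear in this excerpt; the real arrays live in xnvme.sh): every io_mechanism is run once with conserve_cpu=false and once with true, and each combination repeats the same three tests.

  for io in libaio io_uring; do                     # "${xnvme_io[@]}", xnvme.sh@75
    method_bdev_xnvme_create_0["io_mechanism"]=$io  # xnvme.sh@76
    for cc in false true; do                        # "${xnvme_conserve_cpu[@]}", xnvme.sh@82
      method_bdev_xnvme_create_0["conserve_cpu"]=$cc
      run_test xnvme_rpc xnvme_rpc                  # xnvme.sh@86
      run_test xnvme_bdevperf xnvme_bdevperf        # xnvme.sh@87
      run_test xnvme_fio_plugin xnvme_fio_plugin    # xnvme.sh@88
    done
  done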
00:14:27.912 [2024-12-05 12:17:58.634699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70148 ] 00:14:28.173 [2024-12-05 12:17:58.802259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.173 [2024-12-05 12:17:58.959441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.113 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.113 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:29.113 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.114 xnvme_bdev 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70148 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70148 ']' 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70148 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70148 00:14:29.114 killing process with pid 70148 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70148' 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70148 00:14:29.114 12:17:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70148 00:14:31.028 00:14:31.028 real 0m3.275s 00:14:31.028 user 0m3.175s 00:14:31.028 sys 0m0.605s 00:14:31.028 ************************************ 00:14:31.028 END TEST xnvme_rpc 00:14:31.028 ************************************ 00:14:31.028 12:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.028 12:18:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.028 12:18:01 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:31.028 12:18:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:31.028 12:18:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.028 12:18:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.028 ************************************ 00:14:31.028 START TEST xnvme_bdevperf 00:14:31.028 ************************************ 00:14:31.028 12:18:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:31.028 12:18:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:31.028 12:18:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:31.028 12:18:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:31.028 12:18:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:31.028 12:18:01 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:31.028 12:18:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:31.028 12:18:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:31.288 { 00:14:31.288 "subsystems": [ 00:14:31.288 { 00:14:31.288 "subsystem": "bdev", 00:14:31.288 "config": [ 00:14:31.288 { 00:14:31.288 "params": { 00:14:31.288 "io_mechanism": "io_uring", 00:14:31.288 "conserve_cpu": false, 00:14:31.288 "filename": "/dev/nvme0n1", 00:14:31.288 "name": "xnvme_bdev" 00:14:31.288 }, 00:14:31.288 "method": "bdev_xnvme_create" 00:14:31.288 }, 00:14:31.288 { 00:14:31.288 "method": "bdev_wait_for_examine" 00:14:31.288 } 00:14:31.288 ] 00:14:31.288 } 00:14:31.288 ] 00:14:31.288 } 00:14:31.288 [2024-12-05 12:18:01.965522] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:14:31.288 [2024-12-05 12:18:01.965922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70228 ] 00:14:31.288 [2024-12-05 12:18:02.136113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.548 [2024-12-05 12:18:02.293292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.807 Running I/O for 5 seconds... 00:14:34.133 31659.00 IOPS, 123.67 MiB/s [2024-12-05T12:18:05.945Z] 32036.00 IOPS, 125.14 MiB/s [2024-12-05T12:18:06.887Z] 32314.33 IOPS, 126.23 MiB/s [2024-12-05T12:18:07.829Z] 32321.75 IOPS, 126.26 MiB/s 00:14:36.960 Latency(us) 00:14:36.960 [2024-12-05T12:18:07.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.960 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:36.960 xnvme_bdev : 5.00 32326.98 126.28 0.00 0.00 1975.83 395.42 10334.52 00:14:36.960 [2024-12-05T12:18:07.829Z] =================================================================================================================== 00:14:36.960 [2024-12-05T12:18:07.829Z] Total : 32326.98 126.28 0.00 0.00 1975.83 395.42 10334.52 00:14:37.906 12:18:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:37.906 12:18:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:37.906 12:18:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:37.906 12:18:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:37.906 12:18:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:37.906 { 00:14:37.906 "subsystems": [ 00:14:37.906 { 00:14:37.906 "subsystem": "bdev", 00:14:37.906 "config": [ 00:14:37.906 { 00:14:37.906 "params": { 00:14:37.906 "io_mechanism": "io_uring", 00:14:37.906 "conserve_cpu": false, 00:14:37.906 "filename": "/dev/nvme0n1", 00:14:37.906 "name": "xnvme_bdev" 00:14:37.906 }, 00:14:37.906 "method": "bdev_xnvme_create" 00:14:37.906 }, 00:14:37.906 { 00:14:37.906 "method": "bdev_wait_for_examine" 00:14:37.906 } 00:14:37.906 ] 00:14:37.906 } 00:14:37.906 ] 00:14:37.906 } 00:14:37.906 [2024-12-05 12:18:08.592577] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
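A quick consistency check on the randread result above: bdevperf's MiB/s column is just IOPS times the 4 KiB IO size.

  # 32326.98 IOPS * 4096 B = 132,411,310 B/s; divided by 2^20 gives MiB/s
  echo 'scale=4; 32326.98 * 4096 / 1048576' | bc   # -> 126.2772, i.e. the 126.28 MiB/s reported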
00:14:37.906 [2024-12-05 12:18:08.592750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70303 ] 00:14:37.906 [2024-12-05 12:18:08.760768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.168 [2024-12-05 12:18:08.906085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.430 Running I/O for 5 seconds... 00:14:40.801 33773.00 IOPS, 131.93 MiB/s [2024-12-05T12:18:12.243Z] 33877.00 IOPS, 132.33 MiB/s [2024-12-05T12:18:13.625Z] 34033.00 IOPS, 132.94 MiB/s [2024-12-05T12:18:14.567Z] 34229.25 IOPS, 133.71 MiB/s 00:14:43.698 Latency(us) 00:14:43.698 [2024-12-05T12:18:14.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.698 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:43.698 xnvme_bdev : 5.00 34064.42 133.06 0.00 0.00 1874.94 341.86 5268.09 00:14:43.698 [2024-12-05T12:18:14.567Z] =================================================================================================================== 00:14:43.698 [2024-12-05T12:18:14.567Z] Total : 34064.42 133.06 0.00 0.00 1874.94 341.86 5268.09 00:14:44.271 00:14:44.271 real 0m13.242s 00:14:44.271 user 0m6.283s 00:14:44.271 sys 0m6.680s 00:14:44.271 ************************************ 00:14:44.271 END TEST xnvme_bdevperf 00:14:44.271 ************************************ 00:14:44.271 12:18:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.271 12:18:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:44.531 12:18:15 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:44.531 12:18:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:44.531 12:18:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.531 12:18:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:44.531 ************************************ 00:14:44.531 START TEST xnvme_fio_plugin 00:14:44.531 ************************************ 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:44.531 12:18:15 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:44.531 12:18:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.531 { 00:14:44.531 "subsystems": [ 00:14:44.531 { 00:14:44.531 "subsystem": "bdev", 00:14:44.531 "config": [ 00:14:44.531 { 00:14:44.531 "params": { 00:14:44.531 "io_mechanism": "io_uring", 00:14:44.531 "conserve_cpu": false, 00:14:44.531 "filename": "/dev/nvme0n1", 00:14:44.531 "name": "xnvme_bdev" 00:14:44.531 }, 00:14:44.531 "method": "bdev_xnvme_create" 00:14:44.531 }, 00:14:44.531 { 00:14:44.531 "method": "bdev_wait_for_examine" 00:14:44.531 } 00:14:44.531 ] 00:14:44.531 } 00:14:44.531 ] 00:14:44.531 } 00:14:44.792 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:44.792 fio-3.35 00:14:44.792 Starting 1 thread 00:14:51.359 00:14:51.359 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70417: Thu Dec 5 12:18:21 2024 00:14:51.359 read: IOPS=45.8k, BW=179MiB/s (188MB/s)(895MiB/5001msec) 00:14:51.359 slat (usec): min=2, max=244, avg= 3.58, stdev= 1.62 00:14:51.359 clat (usec): min=717, max=4893, avg=1259.00, stdev=269.52 00:14:51.359 lat (usec): min=720, max=4896, avg=1262.58, stdev=269.54 00:14:51.359 clat percentiles (usec): 00:14:51.359 | 1.00th=[ 865], 5.00th=[ 922], 10.00th=[ 979], 20.00th=[ 1057], 00:14:51.359 | 30.00th=[ 1106], 40.00th=[ 1156], 50.00th=[ 1205], 60.00th=[ 1270], 00:14:51.359 | 70.00th=[ 1336], 80.00th=[ 1434], 90.00th=[ 1582], 95.00th=[ 1762], 00:14:51.359 | 99.00th=[ 2114], 99.50th=[ 2311], 99.90th=[ 3064], 99.95th=[ 3294], 00:14:51.359 | 99.99th=[ 4146] 00:14:51.359 bw ( KiB/s): min=144896, max=208896, per=100.00%, avg=183352.89, stdev=19722.71, 
samples=9 00:14:51.359 iops : min=36224, max=52224, avg=45838.22, stdev=4930.68, samples=9 00:14:51.359 lat (usec) : 750=0.01%, 1000=12.71% 00:14:51.359 lat (msec) : 2=85.49%, 4=1.78%, 10=0.02% 00:14:51.359 cpu : usr=34.58%, sys=64.34%, ctx=43, majf=0, minf=762 00:14:51.359 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:51.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.359 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:51.359 issued rwts: total=229096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.359 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:51.359 00:14:51.359 Run status group 0 (all jobs): 00:14:51.359 READ: bw=179MiB/s (188MB/s), 179MiB/s-179MiB/s (188MB/s-188MB/s), io=895MiB (938MB), run=5001-5001msec 00:14:51.621 ----------------------------------------------------- 00:14:51.621 Suppressions used: 00:14:51.621 count bytes template 00:14:51.621 1 11 /usr/src/fio/parse.c 00:14:51.621 1 8 libtcmalloc_minimal.so 00:14:51.622 1 904 libcrypto.so 00:14:51.622 ----------------------------------------------------- 00:14:51.622 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:51.622 12:18:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.622 { 00:14:51.622 "subsystems": [ 00:14:51.622 { 00:14:51.622 "subsystem": "bdev", 00:14:51.622 "config": [ 00:14:51.622 { 00:14:51.622 "params": { 00:14:51.622 "io_mechanism": "io_uring", 00:14:51.622 "conserve_cpu": false, 00:14:51.622 "filename": "/dev/nvme0n1", 00:14:51.622 "name": "xnvme_bdev" 00:14:51.622 }, 00:14:51.622 "method": "bdev_xnvme_create" 00:14:51.622 }, 00:14:51.622 { 00:14:51.622 "method": "bdev_wait_for_examine" 00:14:51.622 } 00:14:51.622 ] 00:14:51.622 } 00:14:51.622 ] 00:14:51.622 } 00:14:51.622 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:51.622 fio-3.35 00:14:51.622 Starting 1 thread 00:14:58.202 00:14:58.202 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70514: Thu Dec 5 12:18:28 2024 00:14:58.202 write: IOPS=47.2k, BW=184MiB/s (193MB/s)(921MiB/5001msec); 0 zone resets 00:14:58.203 slat (usec): min=2, max=105, avg= 4.14, stdev= 1.60 00:14:58.203 clat (usec): min=168, max=3635, avg=1196.73, stdev=282.76 00:14:58.203 lat (usec): min=172, max=3678, avg=1200.87, stdev=283.04 00:14:58.203 clat percentiles (usec): 00:14:58.203 | 1.00th=[ 717], 5.00th=[ 775], 10.00th=[ 816], 20.00th=[ 906], 00:14:58.203 | 30.00th=[ 996], 40.00th=[ 1139], 50.00th=[ 1221], 60.00th=[ 1287], 00:14:58.203 | 70.00th=[ 1352], 80.00th=[ 1434], 90.00th=[ 1549], 95.00th=[ 1663], 00:14:58.203 | 99.00th=[ 1909], 99.50th=[ 2008], 99.90th=[ 2180], 99.95th=[ 2278], 00:14:58.203 | 99.99th=[ 3425] 00:14:58.203 bw ( KiB/s): min=162304, max=245248, per=98.10%, avg=185044.44, stdev=31417.35, samples=9 00:14:58.203 iops : min=40576, max=61312, avg=46261.11, stdev=7854.34, samples=9 00:14:58.203 lat (usec) : 250=0.01%, 750=3.12%, 1000=27.17% 00:14:58.203 lat (msec) : 2=69.17%, 4=0.54% 00:14:58.203 cpu : usr=37.58%, sys=61.34%, ctx=44, majf=0, minf=763 00:14:58.203 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:58.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.203 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:58.203 issued rwts: total=0,235823,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:58.203 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:58.203 00:14:58.203 Run status group 0 (all jobs): 00:14:58.203 WRITE: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=921MiB (966MB), run=5001-5001msec 00:14:58.459 ----------------------------------------------------- 00:14:58.459 Suppressions used: 00:14:58.459 count bytes template 00:14:58.459 1 11 /usr/src/fio/parse.c 00:14:58.459 1 8 libtcmalloc_minimal.so 00:14:58.459 1 904 libcrypto.so 00:14:58.459 ----------------------------------------------------- 00:14:58.459 00:14:58.459 00:14:58.459 real 0m14.003s 00:14:58.459 user 0m6.510s 00:14:58.459 sys 0m7.028s 00:14:58.459 12:18:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.459 
12:18:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:58.459 ************************************ 00:14:58.459 END TEST xnvme_fio_plugin 00:14:58.459 ************************************ 00:14:58.459 12:18:29 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:58.459 12:18:29 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:58.459 12:18:29 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:58.459 12:18:29 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:58.459 12:18:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:58.459 12:18:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.459 12:18:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:58.459 ************************************ 00:14:58.459 START TEST xnvme_rpc 00:14:58.459 ************************************ 00:14:58.459 12:18:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:58.459 12:18:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:58.459 12:18:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:58.459 12:18:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:58.459 12:18:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:58.459 12:18:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70595 00:14:58.459 12:18:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70595 00:14:58.459 12:18:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70595 ']' 00:14:58.459 12:18:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.459 12:18:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.459 12:18:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.459 12:18:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:58.459 12:18:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.459 12:18:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.459 [2024-12-05 12:18:29.312223] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
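[Note] The xnvme_rpc test starting here never touches the I/O path; it drives the target's JSON-RPC surface to create an xnvme bdev, read its parameters back, and delete it. A minimal by-hand equivalent, a sketch assuming a built SPDK tree, the default /var/tmp/spdk.sock socket, and jq installed (rpc_cmd in the harness wraps scripts/rpc.py):

    ./build/bin/spdk_tgt &                 # single-reactor target, as in the log
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params'
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill %1                                # tear the target down again

The -c flag is the conserve_cpu toggle the surrounding loop is exercising; the same create call without it reports "conserve_cpu": false in the config dump.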
00:14:58.460 [2024-12-05 12:18:29.312494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70595 ] 00:14:58.717 [2024-12-05 12:18:29.470185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.717 [2024-12-05 12:18:29.580359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.653 xnvme_bdev 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70595 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70595 ']' 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70595 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70595 00:14:59.653 killing process with pid 70595 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70595' 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70595 00:14:59.653 12:18:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70595 00:15:01.583 00:15:01.583 real 0m2.826s 00:15:01.583 user 0m2.876s 00:15:01.583 sys 0m0.407s 00:15:01.583 ************************************ 00:15:01.583 END TEST xnvme_rpc 00:15:01.583 ************************************ 00:15:01.583 12:18:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.583 12:18:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.583 12:18:32 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:01.583 12:18:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:01.583 12:18:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.583 12:18:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.583 ************************************ 00:15:01.583 START TEST xnvme_bdevperf 00:15:01.583 ************************************ 00:15:01.583 12:18:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:01.583 12:18:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:01.583 12:18:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:15:01.584 12:18:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:01.584 12:18:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:01.584 12:18:32 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:01.584 12:18:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:01.584 12:18:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:01.584 { 00:15:01.584 "subsystems": [ 00:15:01.584 { 00:15:01.584 "subsystem": "bdev", 00:15:01.584 "config": [ 00:15:01.584 { 00:15:01.584 "params": { 00:15:01.584 "io_mechanism": "io_uring", 00:15:01.584 "conserve_cpu": true, 00:15:01.584 "filename": "/dev/nvme0n1", 00:15:01.584 "name": "xnvme_bdev" 00:15:01.584 }, 00:15:01.584 "method": "bdev_xnvme_create" 00:15:01.584 }, 00:15:01.584 { 00:15:01.584 "method": "bdev_wait_for_examine" 00:15:01.584 } 00:15:01.584 ] 00:15:01.584 } 00:15:01.584 ] 00:15:01.584 } 00:15:01.584 [2024-12-05 12:18:32.184753] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:15:01.584 [2024-12-05 12:18:32.185024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70669 ] 00:15:01.584 [2024-12-05 12:18:32.347477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.584 [2024-12-05 12:18:32.435276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.842 Running I/O for 5 seconds... 00:15:03.842 41603.00 IOPS, 162.51 MiB/s [2024-12-05T12:18:35.662Z] 38075.00 IOPS, 148.73 MiB/s [2024-12-05T12:18:37.049Z] 37177.33 IOPS, 145.22 MiB/s [2024-12-05T12:18:37.992Z] 36165.25 IOPS, 141.27 MiB/s 00:15:07.123 Latency(us) 00:15:07.123 [2024-12-05T12:18:37.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.123 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:07.123 xnvme_bdev : 5.00 36191.42 141.37 0.00 0.00 1764.22 671.11 11544.42 00:15:07.123 [2024-12-05T12:18:37.992Z] =================================================================================================================== 00:15:07.123 [2024-12-05T12:18:37.992Z] Total : 36191.42 141.37 0.00 0.00 1764.22 671.11 11544.42 00:15:07.694 12:18:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:07.694 12:18:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:07.694 12:18:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:07.694 12:18:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:07.694 12:18:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:07.694 { 00:15:07.694 "subsystems": [ 00:15:07.694 { 00:15:07.694 "subsystem": "bdev", 00:15:07.694 "config": [ 00:15:07.694 { 00:15:07.694 "params": { 00:15:07.694 "io_mechanism": "io_uring", 00:15:07.694 "conserve_cpu": true, 00:15:07.694 "filename": "/dev/nvme0n1", 00:15:07.694 "name": "xnvme_bdev" 00:15:07.694 }, 00:15:07.694 "method": "bdev_xnvme_create" 00:15:07.694 }, 00:15:07.694 { 00:15:07.694 "method": "bdev_wait_for_examine" 00:15:07.694 } 00:15:07.694 ] 00:15:07.694 } 00:15:07.694 ] 00:15:07.694 } 00:15:07.956 [2024-12-05 12:18:38.597047] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
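[Note] Reduced to a standalone command against a config file, the bdevperf invocation above comes down to the sketch below. The flag glosses are assumptions from bdevperf's usage text, not spelled out in the log:

    # -q queue depth, -w workload, -t run time in seconds,
    # -T bdev to test (default: all bdevs), -o I/O size in bytes
    ./build/examples/bdevperf --json ./bdev.json \
        -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096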
00:15:07.956 [2024-12-05 12:18:38.597194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70744 ] 00:15:07.956 [2024-12-05 12:18:38.757268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.217 [2024-12-05 12:18:38.881632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.477 Running I/O for 5 seconds... 00:15:10.369 33618.00 IOPS, 131.32 MiB/s [2024-12-05T12:18:42.623Z] 33494.50 IOPS, 130.84 MiB/s [2024-12-05T12:18:43.568Z] 33512.33 IOPS, 130.91 MiB/s [2024-12-05T12:18:44.511Z] 33627.00 IOPS, 131.36 MiB/s 00:15:13.642 Latency(us) 00:15:13.642 [2024-12-05T12:18:44.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.642 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:13.642 xnvme_bdev : 5.00 33519.59 130.94 0.00 0.00 1905.06 220.55 8318.03 00:15:13.642 [2024-12-05T12:18:44.511Z] =================================================================================================================== 00:15:13.642 [2024-12-05T12:18:44.511Z] Total : 33519.59 130.94 0.00 0.00 1905.06 220.55 8318.03 00:15:14.597 00:15:14.597 real 0m12.976s 00:15:14.597 user 0m8.091s 00:15:14.597 sys 0m4.338s 00:15:14.597 12:18:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.597 ************************************ 00:15:14.597 END TEST xnvme_bdevperf 00:15:14.597 ************************************ 00:15:14.597 12:18:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:14.597 12:18:45 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:14.597 12:18:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:14.597 12:18:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.597 12:18:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.597 ************************************ 00:15:14.597 START TEST xnvme_fio_plugin 00:15:14.597 ************************************ 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:14.597 12:18:45 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:14.597 12:18:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.597 { 00:15:14.597 "subsystems": [ 00:15:14.597 { 00:15:14.597 "subsystem": "bdev", 00:15:14.597 "config": [ 00:15:14.597 { 00:15:14.597 "params": { 00:15:14.597 "io_mechanism": "io_uring", 00:15:14.597 "conserve_cpu": true, 00:15:14.597 "filename": "/dev/nvme0n1", 00:15:14.597 "name": "xnvme_bdev" 00:15:14.597 }, 00:15:14.597 "method": "bdev_xnvme_create" 00:15:14.597 }, 00:15:14.597 { 00:15:14.597 "method": "bdev_wait_for_examine" 00:15:14.597 } 00:15:14.597 ] 00:15:14.597 } 00:15:14.597 ] 00:15:14.597 } 00:15:14.597 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:14.597 fio-3.35 00:15:14.597 Starting 1 thread 00:15:21.193 00:15:21.193 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70869: Thu Dec 5 12:18:51 2024 00:15:21.193 read: IOPS=32.3k, BW=126MiB/s (132MB/s)(630MiB/5002msec) 00:15:21.193 slat (nsec): min=2879, max=67517, avg=3553.59, stdev=1834.82 00:15:21.193 clat (usec): min=1116, max=3803, avg=1837.24, stdev=252.47 00:15:21.193 lat (usec): min=1120, max=3833, avg=1840.79, stdev=252.80 00:15:21.193 clat percentiles (usec): 00:15:21.193 | 1.00th=[ 1385], 5.00th=[ 1483], 10.00th=[ 1549], 20.00th=[ 1614], 00:15:21.193 | 30.00th=[ 1680], 40.00th=[ 1745], 50.00th=[ 1811], 60.00th=[ 1876], 00:15:21.193 | 70.00th=[ 1942], 80.00th=[ 2040], 90.00th=[ 2180], 95.00th=[ 2278], 00:15:21.193 | 99.00th=[ 2540], 99.50th=[ 2671], 99.90th=[ 2868], 99.95th=[ 3097], 00:15:21.193 | 99.99th=[ 3654] 00:15:21.193 bw ( KiB/s): min=127488, max=130048, per=100.00%, avg=129137.78, 
stdev=950.23, samples=9 00:15:21.193 iops : min=31872, max=32512, avg=32284.44, stdev=237.56, samples=9 00:15:21.193 lat (msec) : 2=76.16%, 4=23.84% 00:15:21.193 cpu : usr=59.77%, sys=36.49%, ctx=13, majf=0, minf=762 00:15:21.193 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:21.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.193 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:21.193 issued rwts: total=161344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:21.193 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:21.193 00:15:21.193 Run status group 0 (all jobs): 00:15:21.193 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=630MiB (661MB), run=5002-5002msec 00:15:21.454 ----------------------------------------------------- 00:15:21.454 Suppressions used: 00:15:21.454 count bytes template 00:15:21.454 1 11 /usr/src/fio/parse.c 00:15:21.454 1 8 libtcmalloc_minimal.so 00:15:21.454 1 904 libcrypto.so 00:15:21.454 ----------------------------------------------------- 00:15:21.454 00:15:21.454 12:18:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:21.454 12:18:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.454 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.454 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:21.454 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:21.454 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:21.454 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.454 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:21.454 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:21.455 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:21.455 12:18:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:21.455 12:18:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:21.455 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:21.455 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.455 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:21.455 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:21.455 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:21.455 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:21.455 12:18:52 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:21.455 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:21.455 12:18:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.455 { 00:15:21.455 "subsystems": [ 00:15:21.455 { 00:15:21.455 "subsystem": "bdev", 00:15:21.455 "config": [ 00:15:21.455 { 00:15:21.455 "params": { 00:15:21.455 "io_mechanism": "io_uring", 00:15:21.455 "conserve_cpu": true, 00:15:21.455 "filename": "/dev/nvme0n1", 00:15:21.455 "name": "xnvme_bdev" 00:15:21.455 }, 00:15:21.455 "method": "bdev_xnvme_create" 00:15:21.455 }, 00:15:21.455 { 00:15:21.455 "method": "bdev_wait_for_examine" 00:15:21.455 } 00:15:21.455 ] 00:15:21.455 } 00:15:21.455 ] 00:15:21.455 } 00:15:21.715 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:21.715 fio-3.35 00:15:21.715 Starting 1 thread 00:15:28.299 00:15:28.299 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70961: Thu Dec 5 12:18:58 2024 00:15:28.299 write: IOPS=34.4k, BW=135MiB/s (141MB/s)(673MiB/5001msec); 0 zone resets 00:15:28.299 slat (nsec): min=2907, max=70529, avg=3793.30, stdev=1669.50 00:15:28.299 clat (usec): min=458, max=4491, avg=1709.00, stdev=257.51 00:15:28.299 lat (usec): min=472, max=4535, avg=1712.79, stdev=257.79 00:15:28.299 clat percentiles (usec): 00:15:28.299 | 1.00th=[ 1221], 5.00th=[ 1352], 10.00th=[ 1418], 20.00th=[ 1500], 00:15:28.299 | 30.00th=[ 1565], 40.00th=[ 1631], 50.00th=[ 1680], 60.00th=[ 1745], 00:15:28.299 | 70.00th=[ 1811], 80.00th=[ 1909], 90.00th=[ 2040], 95.00th=[ 2147], 00:15:28.299 | 99.00th=[ 2409], 99.50th=[ 2540], 99.90th=[ 2900], 99.95th=[ 3392], 00:15:28.299 | 99.99th=[ 4293] 00:15:28.299 bw ( KiB/s): min=135152, max=141728, per=99.70%, avg=137338.67, stdev=2140.96, samples=9 00:15:28.299 iops : min=33788, max=35432, avg=34334.67, stdev=535.24, samples=9 00:15:28.299 lat (usec) : 500=0.01%, 1000=0.01% 00:15:28.299 lat (msec) : 2=87.42%, 4=12.54%, 10=0.03% 00:15:28.299 cpu : usr=66.58%, sys=30.22%, ctx=13, majf=0, minf=763 00:15:28.299 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:28.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.299 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:28.299 issued rwts: total=0,172225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.299 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.299 00:15:28.299 Run status group 0 (all jobs): 00:15:28.299 WRITE: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=673MiB (705MB), run=5001-5001msec 00:15:28.561 ----------------------------------------------------- 00:15:28.561 Suppressions used: 00:15:28.561 count bytes template 00:15:28.561 1 11 /usr/src/fio/parse.c 00:15:28.561 1 8 libtcmalloc_minimal.so 00:15:28.561 1 904 libcrypto.so 00:15:28.561 ----------------------------------------------------- 00:15:28.561 00:15:28.561 00:15:28.561 real 0m14.173s 00:15:28.561 user 0m9.385s 00:15:28.561 sys 0m4.108s 00:15:28.561 12:18:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.561 ************************************ 
00:15:28.561 END TEST xnvme_fio_plugin 00:15:28.561 ************************************ 00:15:28.561 12:18:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:28.561 12:18:59 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:28.561 12:18:59 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:15:28.561 12:18:59 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:15:28.561 12:18:59 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:15:28.561 12:18:59 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:28.561 12:18:59 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:28.561 12:18:59 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:28.561 12:18:59 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:28.561 12:18:59 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:28.561 12:18:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:28.561 12:18:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.561 12:18:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.561 ************************************ 00:15:28.561 START TEST xnvme_rpc 00:15:28.561 ************************************ 00:15:28.561 12:18:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:28.561 12:18:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:28.561 12:18:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:28.561 12:18:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:28.561 12:18:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:28.561 12:18:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71047 00:15:28.561 12:18:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71047 00:15:28.561 12:18:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:28.561 12:18:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71047 ']' 00:15:28.561 12:18:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.561 12:18:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.561 12:18:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.561 12:18:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.561 12:18:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.823 [2024-12-05 12:18:59.508578] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
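[Note] From this point the outer loop has advanced from io_uring on the block device to io_uring_cmd, which submits NVMe passthrough commands to the NVMe character device (/dev/ng0n1) instead of block I/O to /dev/nvme0n1. Only the mechanism and device change in the create call; a sketch mirroring the rpc_cmd line that follows:

    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd

(The harness passes an explicit empty string for the conserve_cpu argument here, so the false case runs first.)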
00:15:28.823 [2024-12-05 12:18:59.508754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71047 ] 00:15:28.823 [2024-12-05 12:18:59.677542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.084 [2024-12-05 12:18:59.838163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.027 xnvme_bdev 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71047 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71047 ']' 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71047 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71047 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.027 killing process with pid 71047 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71047' 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71047 00:15:30.027 12:19:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71047 00:15:31.964 00:15:31.964 real 0m3.132s 00:15:31.964 user 0m3.032s 00:15:31.964 sys 0m0.589s 00:15:31.964 12:19:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.964 ************************************ 00:15:31.964 END TEST xnvme_rpc 00:15:31.964 ************************************ 00:15:31.964 12:19:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.964 12:19:02 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:31.964 12:19:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:31.964 12:19:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.964 12:19:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:31.964 ************************************ 00:15:31.964 START TEST xnvme_bdevperf 00:15:31.964 ************************************ 00:15:31.964 12:19:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:31.964 12:19:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:31.964 12:19:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:31.964 12:19:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:31.964 12:19:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:31.964 12:19:02 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:31.964 12:19:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:31.964 12:19:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:31.964 { 00:15:31.964 "subsystems": [ 00:15:31.964 { 00:15:31.964 "subsystem": "bdev", 00:15:31.964 "config": [ 00:15:31.964 { 00:15:31.964 "params": { 00:15:31.964 "io_mechanism": "io_uring_cmd", 00:15:31.964 "conserve_cpu": false, 00:15:31.964 "filename": "/dev/ng0n1", 00:15:31.964 "name": "xnvme_bdev" 00:15:31.964 }, 00:15:31.964 "method": "bdev_xnvme_create" 00:15:31.964 }, 00:15:31.964 { 00:15:31.964 "method": "bdev_wait_for_examine" 00:15:31.964 } 00:15:31.964 ] 00:15:31.964 } 00:15:31.964 ] 00:15:31.964 } 00:15:31.964 [2024-12-05 12:19:02.651392] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:15:31.964 [2024-12-05 12:19:02.651520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71121 ] 00:15:31.964 [2024-12-05 12:19:02.812123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.222 [2024-12-05 12:19:02.925064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.481 Running I/O for 5 seconds... 00:15:34.348 61868.00 IOPS, 241.67 MiB/s [2024-12-05T12:19:06.595Z] 62611.50 IOPS, 244.58 MiB/s [2024-12-05T12:19:07.534Z] 56845.67 IOPS, 222.05 MiB/s [2024-12-05T12:19:08.470Z] 52193.25 IOPS, 203.88 MiB/s 00:15:37.601 Latency(us) 00:15:37.601 [2024-12-05T12:19:08.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.601 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:37.601 xnvme_bdev : 5.00 51349.97 200.59 0.00 0.00 1242.48 441.11 7763.50 00:15:37.601 [2024-12-05T12:19:08.470Z] =================================================================================================================== 00:15:37.601 [2024-12-05T12:19:08.470Z] Total : 51349.97 200.59 0.00 0.00 1242.48 441.11 7763.50 00:15:38.168 12:19:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:38.168 12:19:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:38.168 12:19:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:38.168 12:19:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:38.168 12:19:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:38.168 { 00:15:38.168 "subsystems": [ 00:15:38.168 { 00:15:38.168 "subsystem": "bdev", 00:15:38.168 "config": [ 00:15:38.168 { 00:15:38.168 "params": { 00:15:38.168 "io_mechanism": "io_uring_cmd", 00:15:38.168 "conserve_cpu": false, 00:15:38.168 "filename": "/dev/ng0n1", 00:15:38.168 "name": "xnvme_bdev" 00:15:38.168 }, 00:15:38.168 "method": "bdev_xnvme_create" 00:15:38.168 }, 00:15:38.168 { 00:15:38.168 "method": "bdev_wait_for_examine" 00:15:38.168 } 00:15:38.168 ] 00:15:38.168 } 00:15:38.168 ] 00:15:38.168 } 00:15:38.168 [2024-12-05 12:19:09.007309] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
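[Note] Each bdevperf run regenerates its config on the fly: gen_conf prints the JSON document echoed above, and process substitution hands it to the tool as /dev/fd/62, so nothing is written to disk. A standalone sketch of the same pattern, using a hypothetical conf.json holding that document:

    ./build/examples/bdevperf --json <(cat conf.json) \
        -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096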
00:15:38.168 [2024-12-05 12:19:09.007437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71190 ] 00:15:38.427 [2024-12-05 12:19:09.166775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.427 [2024-12-05 12:19:09.280255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.686 Running I/O for 5 seconds... 00:15:40.992 49920.00 IOPS, 195.00 MiB/s [2024-12-05T12:19:12.796Z] 50016.00 IOPS, 195.38 MiB/s [2024-12-05T12:19:13.730Z] 49123.33 IOPS, 191.89 MiB/s [2024-12-05T12:19:14.789Z] 48538.50 IOPS, 189.60 MiB/s 00:15:43.920 Latency(us) 00:15:43.920 [2024-12-05T12:19:14.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.920 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:43.920 xnvme_bdev : 5.00 48217.03 188.35 0.00 0.00 1323.32 850.71 4360.66 00:15:43.920 [2024-12-05T12:19:14.789Z] =================================================================================================================== 00:15:43.920 [2024-12-05T12:19:14.789Z] Total : 48217.03 188.35 0.00 0.00 1323.32 850.71 4360.66 00:15:44.861 12:19:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:44.861 12:19:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:44.861 12:19:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:44.861 12:19:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:44.861 12:19:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:44.861 { 00:15:44.861 "subsystems": [ 00:15:44.861 { 00:15:44.861 "subsystem": "bdev", 00:15:44.861 "config": [ 00:15:44.861 { 00:15:44.861 "params": { 00:15:44.861 "io_mechanism": "io_uring_cmd", 00:15:44.861 "conserve_cpu": false, 00:15:44.861 "filename": "/dev/ng0n1", 00:15:44.861 "name": "xnvme_bdev" 00:15:44.861 }, 00:15:44.861 "method": "bdev_xnvme_create" 00:15:44.861 }, 00:15:44.861 { 00:15:44.861 "method": "bdev_wait_for_examine" 00:15:44.861 } 00:15:44.861 ] 00:15:44.861 } 00:15:44.861 ] 00:15:44.861 } 00:15:44.861 [2024-12-05 12:19:15.484850] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:15:44.861 [2024-12-05 12:19:15.485020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71264 ] 00:15:44.861 [2024-12-05 12:19:15.649421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.121 [2024-12-05 12:19:15.801256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.382 Running I/O for 5 seconds... 
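[Note] The interim MiB/s figures printed below are pure arithmetic on the sampled IOPS at the fixed 4096-byte I/O size, i.e. MiB/s = IOPS * 4096 / 2^20 = IOPS / 256. Spot-checking the first unmap sample that follows:

    echo $(( 70656 / 256 ))    # prints 276, matching the 276.00 MiB/s below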
00:15:47.313 70656.00 IOPS, 276.00 MiB/s [2024-12-05T12:19:19.571Z] 70528.00 IOPS, 275.50 MiB/s [2024-12-05T12:19:20.144Z] 70805.33 IOPS, 276.58 MiB/s [2024-12-05T12:19:21.527Z] 70544.00 IOPS, 275.56 MiB/s 00:15:50.658 Latency(us) 00:15:50.658 [2024-12-05T12:19:21.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.658 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:50.658 xnvme_bdev : 5.00 70325.31 274.71 0.00 0.00 906.15 570.29 5873.03 00:15:50.658 [2024-12-05T12:19:21.527Z] =================================================================================================================== 00:15:50.658 [2024-12-05T12:19:21.527Z] Total : 70325.31 274.71 0.00 0.00 906.15 570.29 5873.03 00:15:51.229 12:19:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:51.229 12:19:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:51.229 12:19:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:51.229 12:19:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:51.229 12:19:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:51.229 { 00:15:51.229 "subsystems": [ 00:15:51.229 { 00:15:51.229 "subsystem": "bdev", 00:15:51.229 "config": [ 00:15:51.229 { 00:15:51.229 "params": { 00:15:51.229 "io_mechanism": "io_uring_cmd", 00:15:51.229 "conserve_cpu": false, 00:15:51.229 "filename": "/dev/ng0n1", 00:15:51.229 "name": "xnvme_bdev" 00:15:51.229 }, 00:15:51.229 "method": "bdev_xnvme_create" 00:15:51.230 }, 00:15:51.230 { 00:15:51.230 "method": "bdev_wait_for_examine" 00:15:51.230 } 00:15:51.230 ] 00:15:51.230 } 00:15:51.230 ] 00:15:51.230 } 00:15:51.490 [2024-12-05 12:19:22.099234] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:15:51.490 [2024-12-05 12:19:22.099401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71344 ] 00:15:51.490 [2024-12-05 12:19:22.266652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.750 [2024-12-05 12:19:22.424659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.010 Running I/O for 5 seconds... 
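[Note] With a fixed queue depth, Little's law ties the final average latency to throughput: latency ≈ depth / IOPS. For the write_zeroes result reported below, roughly 421 IOPS at depth 64, that predicts:

    awk 'BEGIN { printf "%.0f usec\n", 64 / 421.29 * 1e6 }'    # ~151914 usec

which is in line with the 150448.75 usec average bdevperf prints; the small gap is expected since the run stretched past its 5-second target (5.11 s).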
00:15:54.335 268.00 IOPS, 1.05 MiB/s [2024-12-05T12:19:25.773Z] 298.50 IOPS, 1.17 MiB/s [2024-12-05T12:19:27.150Z] 293.67 IOPS, 1.15 MiB/s [2024-12-05T12:19:28.093Z] 428.75 IOPS, 1.67 MiB/s [2024-12-05T12:19:28.093Z] 417.60 IOPS, 1.63 MiB/s 00:15:57.224 Latency(us) 00:15:57.224 [2024-12-05T12:19:28.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.224 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:57.224 xnvme_bdev : 5.11 421.29 1.65 0.00 0.00 150448.75 79.16 609787.27 00:15:57.224 [2024-12-05T12:19:28.093Z] =================================================================================================================== 00:15:57.224 [2024-12-05T12:19:28.093Z] Total : 421.29 1.65 0.00 0.00 150448.75 79.16 609787.27 00:15:57.797 00:15:57.797 real 0m26.070s 00:15:57.797 user 0m14.878s 00:15:57.797 sys 0m10.769s 00:15:57.797 12:19:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.797 ************************************ 00:15:57.797 END TEST xnvme_bdevperf 00:15:57.797 ************************************ 00:15:57.797 12:19:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:58.057 12:19:28 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:58.057 12:19:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:58.057 12:19:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.057 12:19:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:58.057 ************************************ 00:15:58.057 START TEST xnvme_fio_plugin 00:15:58.057 ************************************ 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 
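[Note] The xtrace around this point shows the harness probing the fio plugin for a sanitizer dependency before launching fio: it runs ldd on the plugin, pulls out the libasan path, and, when one is found, preloads the sanitizer runtime ahead of the plugin so ASan interposes before fio dlopens the engine. Condensed into a sketch with the same commands as the trace, wrapped as a helper taking the fio arguments:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    [[ -n "$asan_lib" ]] && asan_lib+=" "      # prepend the runtime only when present
    LD_PRELOAD="${asan_lib}${plugin}" /usr/src/fio/fio "$@"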
00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:58.057 12:19:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:58.057 { 00:15:58.057 "subsystems": [ 00:15:58.057 { 00:15:58.057 "subsystem": "bdev", 00:15:58.057 "config": [ 00:15:58.057 { 00:15:58.057 "params": { 00:15:58.057 "io_mechanism": "io_uring_cmd", 00:15:58.057 "conserve_cpu": false, 00:15:58.057 "filename": "/dev/ng0n1", 00:15:58.057 "name": "xnvme_bdev" 00:15:58.057 }, 00:15:58.057 "method": "bdev_xnvme_create" 00:15:58.057 }, 00:15:58.057 { 00:15:58.057 "method": "bdev_wait_for_examine" 00:15:58.057 } 00:15:58.057 ] 00:15:58.057 } 00:15:58.057 ] 00:15:58.057 } 00:15:58.057 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:58.057 fio-3.35 00:15:58.057 Starting 1 thread 00:16:04.645 00:16:04.645 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71462: Thu Dec 5 12:19:34 2024 00:16:04.645 read: IOPS=41.0k, BW=160MiB/s (168MB/s)(801MiB/5001msec) 00:16:04.645 slat (usec): min=2, max=327, avg= 3.72, stdev= 2.40 00:16:04.645 clat (usec): min=644, max=5376, avg=1412.25, stdev=288.73 00:16:04.645 lat (usec): min=648, max=5387, avg=1415.97, stdev=289.12 00:16:04.645 clat percentiles (usec): 00:16:04.645 | 1.00th=[ 881], 5.00th=[ 1020], 10.00th=[ 1090], 20.00th=[ 1188], 00:16:04.645 | 30.00th=[ 1254], 40.00th=[ 1319], 50.00th=[ 1369], 60.00th=[ 1434], 00:16:04.645 | 70.00th=[ 1516], 80.00th=[ 1614], 90.00th=[ 1778], 95.00th=[ 1926], 00:16:04.645 | 99.00th=[ 2278], 99.50th=[ 2409], 99.90th=[ 2802], 99.95th=[ 2999], 00:16:04.645 | 99.99th=[ 5342] 00:16:04.645 bw ( KiB/s): min=153600, max=178176, per=100.00%, avg=165859.56, stdev=8356.79, samples=9 00:16:04.645 iops : min=38400, max=44544, avg=41464.89, stdev=2089.20, samples=9 00:16:04.645 lat (usec) : 750=0.10%, 1000=4.09% 00:16:04.645 lat (msec) : 2=92.36%, 4=3.42%, 10=0.03% 00:16:04.645 cpu : usr=35.44%, sys=62.94%, ctx=44, majf=0, minf=762 00:16:04.645 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:04.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.645 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, 
>=64=0.0% 00:16:04.645 issued rwts: total=205152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:04.645 00:16:04.645 Run status group 0 (all jobs): 00:16:04.645 READ: bw=160MiB/s (168MB/s), 160MiB/s-160MiB/s (168MB/s-168MB/s), io=801MiB (840MB), run=5001-5001msec 00:16:04.906 ----------------------------------------------------- 00:16:04.906 Suppressions used: 00:16:04.906 count bytes template 00:16:04.906 1 11 /usr/src/fio/parse.c 00:16:04.906 1 8 libtcmalloc_minimal.so 00:16:04.906 1 904 libcrypto.so 00:16:04.906 ----------------------------------------------------- 00:16:04.906 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:04.906 12:19:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:04.906 { 00:16:04.906 "subsystems": [ 00:16:04.906 { 00:16:04.906 "subsystem": "bdev", 00:16:04.906 "config": [ 00:16:04.906 { 00:16:04.906 "params": { 00:16:04.906 "io_mechanism": "io_uring_cmd", 00:16:04.906 "conserve_cpu": false, 00:16:04.906 "filename": "/dev/ng0n1", 00:16:04.906 "name": "xnvme_bdev" 00:16:04.906 }, 00:16:04.906 "method": "bdev_xnvme_create" 00:16:04.906 }, 00:16:04.906 { 00:16:04.906 "method": "bdev_wait_for_examine" 00:16:04.906 } 00:16:04.906 ] 00:16:04.906 } 00:16:04.906 ] 00:16:04.906 } 00:16:05.166 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:05.167 fio-3.35 00:16:05.167 Starting 1 thread 00:16:11.754 00:16:11.754 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71553: Thu Dec 5 12:19:41 2024 00:16:11.754 write: IOPS=41.1k, BW=161MiB/s (168MB/s)(803MiB/5001msec); 0 zone resets 00:16:11.754 slat (nsec): min=2913, max=96186, avg=3731.29, stdev=1706.71 00:16:11.754 clat (usec): min=187, max=7996, avg=1412.25, stdev=282.11 00:16:11.754 lat (usec): min=191, max=8000, avg=1415.98, stdev=282.43 00:16:11.754 clat percentiles (usec): 00:16:11.754 | 1.00th=[ 963], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1188], 00:16:11.754 | 30.00th=[ 1237], 40.00th=[ 1303], 50.00th=[ 1369], 60.00th=[ 1434], 00:16:11.754 | 70.00th=[ 1516], 80.00th=[ 1631], 90.00th=[ 1778], 95.00th=[ 1926], 00:16:11.754 | 99.00th=[ 2278], 99.50th=[ 2409], 99.90th=[ 2933], 99.95th=[ 3261], 00:16:11.754 | 99.99th=[ 3916] 00:16:11.754 bw ( KiB/s): min=155704, max=172320, per=100.00%, avg=166726.78, stdev=5359.90, samples=9 00:16:11.754 iops : min=38926, max=43080, avg=41681.89, stdev=1340.12, samples=9 00:16:11.754 lat (usec) : 250=0.01%, 500=0.01%, 750=0.04%, 1000=1.98% 00:16:11.754 lat (msec) : 2=94.60%, 4=3.35%, 10=0.01% 00:16:11.754 cpu : usr=39.84%, sys=59.06%, ctx=31, majf=0, minf=763 00:16:11.754 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.3%, 16=24.8%, 32=50.7%, >=64=1.6% 00:16:11.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.754 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:11.754 issued rwts: total=0,205668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.754 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:11.754 00:16:11.754 Run status group 0 (all jobs): 00:16:11.754 WRITE: bw=161MiB/s (168MB/s), 161MiB/s-161MiB/s (168MB/s-168MB/s), io=803MiB (842MB), run=5001-5001msec 00:16:12.016 ----------------------------------------------------- 00:16:12.016 Suppressions used: 00:16:12.016 count bytes template 00:16:12.016 1 11 /usr/src/fio/parse.c 00:16:12.016 1 8 libtcmalloc_minimal.so 00:16:12.016 1 904 libcrypto.so 00:16:12.016 ----------------------------------------------------- 00:16:12.016 00:16:12.016 00:16:12.016 real 0m13.972s 00:16:12.016 user 0m6.760s 00:16:12.016 sys 0m6.759s 00:16:12.016 ************************************ 00:16:12.016 END TEST xnvme_fio_plugin 00:16:12.016 ************************************ 00:16:12.016 12:19:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.016 12:19:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:12.016 12:19:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:12.016 12:19:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:12.016 12:19:42 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:16:12.016 12:19:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:12.016 12:19:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:12.016 12:19:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.016 12:19:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:12.016 ************************************ 00:16:12.016 START TEST xnvme_rpc 00:16:12.016 ************************************ 00:16:12.016 12:19:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:12.016 12:19:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:12.016 12:19:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:12.016 12:19:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:12.016 12:19:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:12.016 12:19:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71638 00:16:12.016 12:19:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71638 00:16:12.016 12:19:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71638 ']' 00:16:12.016 12:19:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.016 12:19:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.016 12:19:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.016 12:19:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.016 12:19:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.016 12:19:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:12.016 [2024-12-05 12:19:42.859239] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:16:12.016 [2024-12-05 12:19:42.860333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71638 ] 00:16:12.278 [2024-12-05 12:19:43.027883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.539 [2024-12-05 12:19:43.166684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.111 12:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.111 12:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:13.111 12:19:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:16:13.111 12:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.111 12:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.418 xnvme_bdev 00:16:13.418 12:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.418 12:19:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:13.418 12:19:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:13.418 12:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.418 12:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.418 12:19:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:13.418 12:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71638 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71638 ']' 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71638 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71638 00:16:13.418 killing process with pid 71638 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71638' 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71638 00:16:13.418 12:19:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71638 00:16:15.335 ************************************ 00:16:15.335 END TEST xnvme_rpc 00:16:15.335 ************************************ 00:16:15.335 00:16:15.335 real 0m3.228s 00:16:15.335 user 0m3.124s 00:16:15.335 sys 0m0.599s 00:16:15.335 12:19:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.335 12:19:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:15.335 12:19:46 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:15.335 12:19:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:15.335 12:19:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.335 12:19:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:15.335 ************************************ 00:16:15.335 START TEST xnvme_bdevperf 00:16:15.335 ************************************ 00:16:15.335 12:19:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:15.335 12:19:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:15.335 12:19:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:15.335 12:19:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:15.335 12:19:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:15.335 12:19:46 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:15.335 12:19:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:15.335 12:19:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:15.335 { 00:16:15.335 "subsystems": [ 00:16:15.335 { 00:16:15.335 "subsystem": "bdev", 00:16:15.335 "config": [ 00:16:15.335 { 00:16:15.335 "params": { 00:16:15.335 "io_mechanism": "io_uring_cmd", 00:16:15.335 "conserve_cpu": true, 00:16:15.335 "filename": "/dev/ng0n1", 00:16:15.335 "name": "xnvme_bdev" 00:16:15.335 }, 00:16:15.335 "method": "bdev_xnvme_create" 00:16:15.335 }, 00:16:15.335 { 00:16:15.335 "method": "bdev_wait_for_examine" 00:16:15.335 } 00:16:15.335 ] 00:16:15.335 } 00:16:15.335 ] 00:16:15.335 } 00:16:15.335 [2024-12-05 12:19:46.135102] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:16:15.335 [2024-12-05 12:19:46.135261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71715 ] 00:16:15.596 [2024-12-05 12:19:46.304041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.596 [2024-12-05 12:19:46.451261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.166 Running I/O for 5 seconds... 00:16:18.050 41024.00 IOPS, 160.25 MiB/s [2024-12-05T12:19:49.864Z] 42304.00 IOPS, 165.25 MiB/s [2024-12-05T12:19:50.806Z] 42517.33 IOPS, 166.08 MiB/s [2024-12-05T12:19:52.192Z] 42880.00 IOPS, 167.50 MiB/s 00:16:21.323 Latency(us) 00:16:21.323 [2024-12-05T12:19:52.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.323 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:21.323 xnvme_bdev : 5.00 42553.11 166.22 0.00 0.00 1500.55 863.31 3528.86 00:16:21.323 [2024-12-05T12:19:52.192Z] =================================================================================================================== 00:16:21.323 [2024-12-05T12:19:52.192Z] Total : 42553.11 166.22 0.00 0.00 1500.55 863.31 3528.86 00:16:21.895 12:19:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:21.895 12:19:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:21.895 12:19:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:21.895 12:19:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:21.895 12:19:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:22.158 { 00:16:22.158 "subsystems": [ 00:16:22.158 { 00:16:22.158 "subsystem": "bdev", 00:16:22.158 "config": [ 00:16:22.158 { 00:16:22.158 "params": { 00:16:22.158 "io_mechanism": "io_uring_cmd", 00:16:22.158 "conserve_cpu": true, 00:16:22.158 "filename": "/dev/ng0n1", 00:16:22.158 "name": "xnvme_bdev" 00:16:22.158 }, 00:16:22.158 "method": "bdev_xnvme_create" 00:16:22.158 }, 00:16:22.158 { 00:16:22.158 "method": "bdev_wait_for_examine" 00:16:22.158 } 00:16:22.158 ] 00:16:22.158 } 00:16:22.158 ] 00:16:22.158 } 00:16:22.158 [2024-12-05 12:19:52.805689] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:16:22.158 [2024-12-05 12:19:52.806022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71797 ] 00:16:22.158 [2024-12-05 12:19:52.975975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.419 [2024-12-05 12:19:53.121726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.681 Running I/O for 5 seconds... 00:16:25.007 43317.00 IOPS, 169.21 MiB/s [2024-12-05T12:19:56.819Z] 43177.00 IOPS, 168.66 MiB/s [2024-12-05T12:19:57.764Z] 43291.00 IOPS, 169.11 MiB/s [2024-12-05T12:19:58.709Z] 43425.25 IOPS, 169.63 MiB/s [2024-12-05T12:19:58.709Z] 42982.00 IOPS, 167.90 MiB/s 00:16:27.840 Latency(us) 00:16:27.840 [2024-12-05T12:19:58.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.840 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:27.840 xnvme_bdev : 5.00 42960.33 167.81 0.00 0.00 1485.29 715.22 6503.19 00:16:27.840 [2024-12-05T12:19:58.709Z] =================================================================================================================== 00:16:27.840 [2024-12-05T12:19:58.709Z] Total : 42960.33 167.81 0.00 0.00 1485.29 715.22 6503.19 00:16:28.784 12:19:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:28.784 12:19:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:16:28.784 12:19:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:28.784 12:19:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:28.784 12:19:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:28.784 { 00:16:28.784 "subsystems": [ 00:16:28.784 { 00:16:28.784 "subsystem": "bdev", 00:16:28.784 "config": [ 00:16:28.784 { 00:16:28.784 "params": { 00:16:28.784 "io_mechanism": "io_uring_cmd", 00:16:28.784 "conserve_cpu": true, 00:16:28.784 "filename": "/dev/ng0n1", 00:16:28.784 "name": "xnvme_bdev" 00:16:28.784 }, 00:16:28.784 "method": "bdev_xnvme_create" 00:16:28.784 }, 00:16:28.784 { 00:16:28.784 "method": "bdev_wait_for_examine" 00:16:28.784 } 00:16:28.784 ] 00:16:28.784 } 00:16:28.784 ] 00:16:28.784 } 00:16:28.784 [2024-12-05 12:19:59.420743] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:16:28.785 [2024-12-05 12:19:59.420889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71871 ] 00:16:28.785 [2024-12-05 12:19:59.586186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.046 [2024-12-05 12:19:59.727655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.307 Running I/O for 5 seconds... 
00:16:31.640 71808.00 IOPS, 280.50 MiB/s [2024-12-05T12:20:03.082Z] 71808.00 IOPS, 280.50 MiB/s [2024-12-05T12:20:04.465Z] 73984.00 IOPS, 289.00 MiB/s [2024-12-05T12:20:05.400Z] 74240.00 IOPS, 290.00 MiB/s [2024-12-05T12:20:05.400Z] 78246.40 IOPS, 305.65 MiB/s 00:16:34.531 Latency(us) 00:16:34.531 [2024-12-05T12:20:05.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.531 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:16:34.531 xnvme_bdev : 5.00 78234.67 305.60 0.00 0.00 814.53 341.86 4612.73 00:16:34.531 [2024-12-05T12:20:05.400Z] =================================================================================================================== 00:16:34.531 [2024-12-05T12:20:05.400Z] Total : 78234.67 305.60 0.00 0.00 814.53 341.86 4612.73 00:16:35.100 12:20:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:35.100 12:20:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:16:35.100 12:20:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:35.100 12:20:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:35.100 12:20:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:35.100 { 00:16:35.100 "subsystems": [ 00:16:35.100 { 00:16:35.100 "subsystem": "bdev", 00:16:35.100 "config": [ 00:16:35.100 { 00:16:35.100 "params": { 00:16:35.100 "io_mechanism": "io_uring_cmd", 00:16:35.100 "conserve_cpu": true, 00:16:35.100 "filename": "/dev/ng0n1", 00:16:35.100 "name": "xnvme_bdev" 00:16:35.100 }, 00:16:35.100 "method": "bdev_xnvme_create" 00:16:35.100 }, 00:16:35.100 { 00:16:35.100 "method": "bdev_wait_for_examine" 00:16:35.100 } 00:16:35.100 ] 00:16:35.100 } 00:16:35.100 ] 00:16:35.100 } 00:16:35.100 [2024-12-05 12:20:05.742614] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:16:35.100 [2024-12-05 12:20:05.742738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71945 ] 00:16:35.100 [2024-12-05 12:20:05.899682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.359 [2024-12-05 12:20:05.988098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.359 Running I/O for 5 seconds... 
00:16:37.679 2673.00 IOPS, 10.44 MiB/s [2024-12-05T12:20:09.485Z] 25443.50 IOPS, 99.39 MiB/s [2024-12-05T12:20:10.429Z] 34992.33 IOPS, 136.69 MiB/s [2024-12-05T12:20:11.365Z] 37837.75 IOPS, 147.80 MiB/s [2024-12-05T12:20:11.365Z] 38343.80 IOPS, 149.78 MiB/s 00:16:40.496 Latency(us) 00:16:40.496 [2024-12-05T12:20:11.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.496 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:16:40.496 xnvme_bdev : 5.00 38333.11 149.74 0.00 0.00 1664.72 78.38 142767.66 00:16:40.496 [2024-12-05T12:20:11.365Z] =================================================================================================================== 00:16:40.496 [2024-12-05T12:20:11.365Z] Total : 38333.11 149.74 0.00 0.00 1664.72 78.38 142767.66 00:16:41.440 ************************************ 00:16:41.440 END TEST xnvme_bdevperf 00:16:41.440 ************************************ 00:16:41.440 00:16:41.440 real 0m26.036s 00:16:41.440 user 0m19.394s 00:16:41.440 sys 0m4.700s 00:16:41.440 12:20:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.440 12:20:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:41.440 12:20:12 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:41.440 12:20:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:41.440 12:20:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.440 12:20:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:41.440 ************************************ 00:16:41.440 START TEST xnvme_fio_plugin 00:16:41.440 ************************************ 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 
-- # gen_conf 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:41.440 12:20:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:41.440 { 00:16:41.440 "subsystems": [ 00:16:41.440 { 00:16:41.440 "subsystem": "bdev", 00:16:41.440 "config": [ 00:16:41.440 { 00:16:41.440 "params": { 00:16:41.440 "io_mechanism": "io_uring_cmd", 00:16:41.440 "conserve_cpu": true, 00:16:41.440 "filename": "/dev/ng0n1", 00:16:41.440 "name": "xnvme_bdev" 00:16:41.440 }, 00:16:41.440 "method": "bdev_xnvme_create" 00:16:41.440 }, 00:16:41.440 { 00:16:41.440 "method": "bdev_wait_for_examine" 00:16:41.440 } 00:16:41.440 ] 00:16:41.440 } 00:16:41.440 ] 00:16:41.440 } 00:16:41.702 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:41.702 fio-3.35 00:16:41.702 Starting 1 thread 00:16:48.291 00:16:48.291 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72059: Thu Dec 5 12:20:18 2024 00:16:48.291 read: IOPS=42.2k, BW=165MiB/s (173MB/s)(825MiB/5001msec) 00:16:48.291 slat (usec): min=2, max=133, avg= 3.22, stdev= 1.47 00:16:48.291 clat (usec): min=696, max=7614, avg=1386.40, stdev=252.07 00:16:48.291 lat (usec): min=699, max=7617, avg=1389.62, stdev=252.37 00:16:48.291 clat percentiles (usec): 00:16:48.291 | 1.00th=[ 1012], 5.00th=[ 1090], 10.00th=[ 1123], 20.00th=[ 1188], 00:16:48.291 | 30.00th=[ 1221], 40.00th=[ 1270], 50.00th=[ 1319], 60.00th=[ 1401], 00:16:48.291 | 70.00th=[ 1483], 80.00th=[ 1582], 90.00th=[ 1729], 95.00th=[ 1860], 00:16:48.291 | 99.00th=[ 2147], 99.50th=[ 2278], 99.90th=[ 2573], 99.95th=[ 3064], 00:16:48.291 | 99.99th=[ 3687] 00:16:48.291 bw ( KiB/s): min=145408, max=178688, per=99.55%, avg=168217.78, stdev=10356.53, samples=9 00:16:48.291 iops : min=36352, max=44672, avg=42054.44, stdev=2589.13, samples=9 00:16:48.291 lat (usec) : 750=0.01%, 1000=0.68% 00:16:48.291 lat (msec) : 2=96.97%, 4=2.34%, 10=0.01% 00:16:48.291 cpu : usr=78.12%, sys=19.12%, ctx=13, majf=0, minf=762 00:16:48.291 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:48.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.291 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 
64=1.5%, >=64=0.0% 00:16:48.291 issued rwts: total=211267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.291 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:48.291 00:16:48.291 Run status group 0 (all jobs): 00:16:48.291 READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=825MiB (865MB), run=5001-5001msec 00:16:48.553 ----------------------------------------------------- 00:16:48.553 Suppressions used: 00:16:48.553 count bytes template 00:16:48.553 1 11 /usr/src/fio/parse.c 00:16:48.553 1 8 libtcmalloc_minimal.so 00:16:48.553 1 904 libcrypto.so 00:16:48.553 ----------------------------------------------------- 00:16:48.553 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:48.553 12:20:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:48.553 { 00:16:48.553 "subsystems": [ 00:16:48.553 { 00:16:48.553 "subsystem": "bdev", 00:16:48.553 "config": [ 00:16:48.553 { 00:16:48.553 "params": { 00:16:48.553 "io_mechanism": "io_uring_cmd", 00:16:48.553 "conserve_cpu": true, 00:16:48.553 "filename": "/dev/ng0n1", 00:16:48.553 "name": "xnvme_bdev" 00:16:48.553 }, 00:16:48.553 "method": "bdev_xnvme_create" 00:16:48.553 }, 00:16:48.553 { 00:16:48.553 "method": "bdev_wait_for_examine" 00:16:48.553 } 00:16:48.553 ] 00:16:48.553 } 00:16:48.553 ] 00:16:48.553 } 00:16:48.815 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:48.815 fio-3.35 00:16:48.815 Starting 1 thread 00:16:55.409 00:16:55.409 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72151: Thu Dec 5 12:20:25 2024 00:16:55.409 write: IOPS=41.6k, BW=162MiB/s (170MB/s)(812MiB/5001msec); 0 zone resets 00:16:55.409 slat (usec): min=2, max=176, avg= 3.90, stdev= 2.01 00:16:55.409 clat (usec): min=399, max=6018, avg=1388.18, stdev=278.34 00:16:55.409 lat (usec): min=402, max=6022, avg=1392.08, stdev=278.96 00:16:55.409 clat percentiles (usec): 00:16:55.409 | 1.00th=[ 996], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1172], 00:16:55.409 | 30.00th=[ 1221], 40.00th=[ 1270], 50.00th=[ 1319], 60.00th=[ 1401], 00:16:55.409 | 70.00th=[ 1483], 80.00th=[ 1582], 90.00th=[ 1745], 95.00th=[ 1893], 00:16:55.409 | 99.00th=[ 2278], 99.50th=[ 2507], 99.90th=[ 3064], 99.95th=[ 3458], 00:16:55.409 | 99.99th=[ 4621] 00:16:55.409 bw ( KiB/s): min=141488, max=178752, per=99.58%, avg=165624.00, stdev=13397.52, samples=9 00:16:55.409 iops : min=35372, max=44688, avg=41406.00, stdev=3349.38, samples=9 00:16:55.409 lat (usec) : 500=0.01%, 750=0.01%, 1000=1.15% 00:16:55.409 lat (msec) : 2=95.89%, 4=2.93%, 10=0.02% 00:16:55.409 cpu : usr=63.62%, sys=30.54%, ctx=34, majf=0, minf=763 00:16:55.409 IO depths : 1=1.4%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.4%, >=64=1.7% 00:16:55.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.409 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:55.409 issued rwts: total=0,207952,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.409 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:55.409 00:16:55.409 Run status group 0 (all jobs): 00:16:55.409 WRITE: bw=162MiB/s (170MB/s), 162MiB/s-162MiB/s (170MB/s-170MB/s), io=812MiB (852MB), run=5001-5001msec 00:16:55.409 ----------------------------------------------------- 00:16:55.670 Suppressions used: 00:16:55.670 count bytes template 00:16:55.670 1 11 /usr/src/fio/parse.c 00:16:55.670 1 8 libtcmalloc_minimal.so 00:16:55.670 1 904 libcrypto.so 00:16:55.670 ----------------------------------------------------- 00:16:55.670 00:16:55.670 00:16:55.670 real 0m14.141s 00:16:55.670 user 0m10.094s 00:16:55.670 sys 0m3.281s 00:16:55.670 12:20:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.670 ************************************ 00:16:55.670 END TEST xnvme_fio_plugin 00:16:55.670 12:20:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:55.670 ************************************ 00:16:55.670 Process with pid 71638 is not found 00:16:55.670 12:20:26 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71638 00:16:55.670 12:20:26 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71638 ']' 00:16:55.670 12:20:26 nvme_xnvme -- 
common/autotest_common.sh@958 -- # kill -0 71638 00:16:55.670 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71638) - No such process 00:16:55.670 12:20:26 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71638 is not found' 00:16:55.670 ************************************ 00:16:55.670 END TEST nvme_xnvme 00:16:55.670 ************************************ 00:16:55.670 00:16:55.670 real 3m35.346s 00:16:55.670 user 2m2.792s 00:16:55.670 sys 1m17.909s 00:16:55.670 12:20:26 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.670 12:20:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:55.670 12:20:26 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:55.670 12:20:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:55.670 12:20:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.670 12:20:26 -- common/autotest_common.sh@10 -- # set +x 00:16:55.670 ************************************ 00:16:55.670 START TEST blockdev_xnvme 00:16:55.670 ************************************ 00:16:55.670 12:20:26 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:55.670 * Looking for test storage... 00:16:55.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:55.670 12:20:26 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:55.670 12:20:26 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:55.670 12:20:26 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:55.935 12:20:26 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.935 12:20:26 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:16:55.935 12:20:26 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.935 12:20:26 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:55.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.935 --rc genhtml_branch_coverage=1 00:16:55.935 --rc genhtml_function_coverage=1 00:16:55.935 --rc genhtml_legend=1 00:16:55.935 --rc geninfo_all_blocks=1 00:16:55.935 --rc geninfo_unexecuted_blocks=1 00:16:55.935 00:16:55.935 ' 00:16:55.935 12:20:26 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:55.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.935 --rc genhtml_branch_coverage=1 00:16:55.935 --rc genhtml_function_coverage=1 00:16:55.935 --rc genhtml_legend=1 00:16:55.935 --rc geninfo_all_blocks=1 00:16:55.935 --rc geninfo_unexecuted_blocks=1 00:16:55.935 00:16:55.935 ' 00:16:55.935 12:20:26 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:55.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.935 --rc genhtml_branch_coverage=1 00:16:55.935 --rc genhtml_function_coverage=1 00:16:55.935 --rc genhtml_legend=1 00:16:55.935 --rc geninfo_all_blocks=1 00:16:55.935 --rc geninfo_unexecuted_blocks=1 00:16:55.935 00:16:55.935 ' 00:16:55.935 12:20:26 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:55.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.935 --rc genhtml_branch_coverage=1 00:16:55.935 --rc genhtml_function_coverage=1 00:16:55.935 --rc genhtml_legend=1 00:16:55.935 --rc geninfo_all_blocks=1 00:16:55.936 --rc geninfo_unexecuted_blocks=1 00:16:55.936 00:16:55.936 ' 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:16:55.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72291 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72291 00:16:55.936 12:20:26 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72291 ']' 00:16:55.936 12:20:26 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.936 12:20:26 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.936 12:20:26 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.936 12:20:26 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:55.936 12:20:26 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.936 12:20:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:55.936 [2024-12-05 12:20:26.707575] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:16:55.936 [2024-12-05 12:20:26.707776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72291 ] 00:16:56.226 [2024-12-05 12:20:26.875650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.226 [2024-12-05 12:20:27.026390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.223 12:20:27 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.223 12:20:27 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:16:57.223 12:20:27 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:16:57.223 12:20:27 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:16:57.223 12:20:27 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:57.223 12:20:27 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:57.223 12:20:27 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:57.795 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:58.057 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:16:58.319 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:16:58.319 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:16:58.319 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:58.319 12:20:28 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:58.319 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:58.320 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:16:58.320 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:58.320 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:58.320 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:58.320 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:58.320 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:58.320 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:58.320 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:58.320 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:58.320 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:58.320 12:20:28 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.320 12:20:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:58.320 12:20:28 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:16:58.320 nvme0n1 00:16:58.320 nvme0n2 00:16:58.320 nvme0n3 00:16:58.320 nvme1n1 00:16:58.320 nvme2n1 00:16:58.320 nvme3n1 00:16:58.320 12:20:29 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.320 12:20:29 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:16:58.320 12:20:29 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.320 12:20:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:58.320 12:20:29 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.320 12:20:29 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:16:58.320 12:20:29 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:16:58.320 12:20:29 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.320 12:20:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:58.320 12:20:29 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.320 12:20:29 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:16:58.320 12:20:29 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.320 12:20:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:58.320 12:20:29 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.320 12:20:29 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:58.320 12:20:29 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.320 12:20:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:58.320 12:20:29 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.320 12:20:29 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:16:58.320 12:20:29 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:16:58.320 12:20:29 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.320 12:20:29 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:16:58.320 12:20:29 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:16:58.320 12:20:29 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.320 12:20:29 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:16:58.320 12:20:29 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:16:58.320 12:20:29 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "27dd947a-1f55-4c47-81a0-83795fc8ca64"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "27dd947a-1f55-4c47-81a0-83795fc8ca64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "7326a582-ac05-4783-b722-9021559a8a30"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7326a582-ac05-4783-b722-9021559a8a30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "b03b8401-ed29-485f-8696-c62dfdf24acf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b03b8401-ed29-485f-8696-c62dfdf24acf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "fc15fea7-5fe9-4227-8355-d6ca8337244d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fc15fea7-5fe9-4227-8355-d6ca8337244d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "36649b82-c8d2-459e-815f-2df20384d4c2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "36649b82-c8d2-459e-815f-2df20384d4c2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "df0c118c-8153-4e26-b163-7b76de5aa596"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "df0c118c-8153-4e26-b163-7b76de5aa596",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:58.581 12:20:29 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:16:58.581 12:20:29 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:16:58.581 12:20:29 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:16:58.581 12:20:29 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72291 00:16:58.581 12:20:29 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72291 ']' 00:16:58.581 12:20:29 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72291 00:16:58.581 12:20:29 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:16:58.581 12:20:29 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.581 12:20:29 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72291 00:16:58.581 killing process with pid 72291 00:16:58.581 12:20:29 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.581 12:20:29 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.581 12:20:29 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72291' 00:16:58.581 12:20:29 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72291 00:16:58.581 
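Replayed by hand, the setup that produced the bdev JSON above reduces to the loop traced at blockdev.sh@94-96 plus one RPC per namespace. A minimal sketch follows; the repo path, the io_uring mechanism, and the -c flag are taken from the trace, while invoking rpc.py once per command is an assumption (the harness feeds the whole batch to its rpc_cmd helper instead):

  io_mechanism=io_uring                                # matches the printf'd commands above
  nvmes=()
  for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue                         # @95: only real block devices qualify
    nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
  done
  for cmd in "${nvmes[@]}"; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py $cmd   # unquoted on purpose: each element is a whole command line
  done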
12:20:29 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72291
00:17:00.494 12:20:31 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT
00:17:00.494 12:20:31 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 ''
00:17:00.494 12:20:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:17:00.494 12:20:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:00.494 12:20:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:00.494 ************************************
00:17:00.494 START TEST bdev_hello_world
00:17:00.494 ************************************
00:17:00.494 12:20:31 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 ''
00:17:00.494 [2024-12-05 12:20:31.160266] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:17:00.494 [2024-12-05 12:20:31.160438] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72575 ]
00:17:00.494 [2024-12-05 12:20:31.325763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:00.755 [2024-12-05 12:20:31.479492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:01.326 [2024-12-05 12:20:31.933632] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:17:01.326 [2024-12-05 12:20:31.933702] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1
00:17:01.326 [2024-12-05 12:20:31.933722] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:17:01.326 [2024-12-05 12:20:31.936060] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:17:01.326 [2024-12-05 12:20:31.936769] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:17:01.326 [2024-12-05 12:20:31.937047] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io
00:17:01.326 [2024-12-05 12:20:31.937991] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World!
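Stripped of the harness plumbing, the hello-world step above is a single invocation of the prebuilt example against the generated bdev config; a minimal sketch, with both paths taken from the trace:

  SPDK=/home/vagrant/spdk_repo/spdk
  # --json loads the bdev configuration assembled earlier; -b names the bdev the example
  # opens, writes "Hello World!" to, and reads back (the NOTICE lines above are its output)
  "$SPDK/build/examples/hello_bdev" --json "$SPDK/test/bdev/bdev.json" -b nvme0n1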
00:17:01.326
00:17:01.326 [2024-12-05 12:20:31.938053] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:17:02.270
00:17:02.270 real 0m1.738s
00:17:02.270 user 0m1.297s
00:17:02.270 sys 0m0.286s
00:17:02.270 ************************************
00:17:02.270 END TEST bdev_hello_world
00:17:02.270 ************************************
00:17:02.270 12:20:32 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:02.270 12:20:32 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:17:02.270 12:20:32 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds ''
00:17:02.270 12:20:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:02.270 12:20:32 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:02.270 12:20:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:02.270 ************************************
00:17:02.270 START TEST bdev_bounds
00:17:02.270 ************************************
00:17:02.270 12:20:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:17:02.270 Process bdevio pid: 72618
00:17:02.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:02.270 12:20:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72618
00:17:02.270 12:20:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:17:02.271 12:20:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72618'
00:17:02.271 12:20:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72618
00:17:02.271 12:20:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72618 ']'
00:17:02.271 12:20:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:02.271 12:20:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:02.271 12:20:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:17:02.271 12:20:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:02.271 12:20:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:02.271 12:20:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:17:02.271 [2024-12-05 12:20:32.973246] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
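The bdev_bounds flow that starts above is two processes talking over /var/tmp/spdk.sock: bdevio is launched with the flags shown at @288, waitforlisten blocks until the socket is up, and tests.py then triggers the suites. A condensed sketch; the polling loop and the kill/wait teardown are paraphrased from the waitforlisten and killprocess traces rather than copied:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &  # flags verbatim from @288
  bdevio_pid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done                         # waitforlisten, simplified
  "$SPDK/test/bdev/bdevio/tests.py" perform_tests                               # drives the CUnit suites below
  kill "$bdevio_pid" && wait "$bdevio_pid"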
00:17:02.271 [2024-12-05 12:20:32.973415] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72618 ] 00:17:02.531 [2024-12-05 12:20:33.141959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:02.531 [2024-12-05 12:20:33.294172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.531 [2024-12-05 12:20:33.294536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.531 [2024-12-05 12:20:33.294611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.103 12:20:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.103 12:20:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:17:03.103 12:20:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:03.103 I/O targets: 00:17:03.103 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:03.103 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:03.103 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:03.103 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:17:03.103 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:17:03.103 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:17:03.103 00:17:03.103 00:17:03.103 CUnit - A unit testing framework for C - Version 2.1-3 00:17:03.103 http://cunit.sourceforge.net/ 00:17:03.103 00:17:03.103 00:17:03.103 Suite: bdevio tests on: nvme3n1 00:17:03.103 Test: blockdev write read block ...passed 00:17:03.103 Test: blockdev write zeroes read block ...passed 00:17:03.103 Test: blockdev write zeroes read no split ...passed 00:17:03.103 Test: blockdev write zeroes read split ...passed 00:17:03.365 Test: blockdev write zeroes read split partial ...passed 00:17:03.365 Test: blockdev reset ...passed 00:17:03.365 Test: blockdev write read 8 blocks ...passed 00:17:03.365 Test: blockdev write read size > 128k ...passed 00:17:03.365 Test: blockdev write read invalid size ...passed 00:17:03.365 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.365 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.365 Test: blockdev write read max offset ...passed 00:17:03.365 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.365 Test: blockdev writev readv 8 blocks ...passed 00:17:03.365 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.365 Test: blockdev writev readv block ...passed 00:17:03.365 Test: blockdev writev readv size > 128k ...passed 00:17:03.365 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.365 Test: blockdev comparev and writev ...passed 00:17:03.365 Test: blockdev nvme passthru rw ...passed 00:17:03.365 Test: blockdev nvme passthru vendor specific ...passed 00:17:03.365 Test: blockdev nvme admin passthru ...passed 00:17:03.365 Test: blockdev copy ...passed 00:17:03.365 Suite: bdevio tests on: nvme2n1 00:17:03.365 Test: blockdev write read block ...passed 00:17:03.365 Test: blockdev write zeroes read block ...passed 00:17:03.365 Test: blockdev write zeroes read no split ...passed 00:17:03.365 Test: blockdev write zeroes read split ...passed 00:17:03.365 Test: blockdev write zeroes read split partial ...passed 00:17:03.365 Test: blockdev reset ...passed 
00:17:03.365 Test: blockdev write read 8 blocks ...passed 00:17:03.365 Test: blockdev write read size > 128k ...passed 00:17:03.365 Test: blockdev write read invalid size ...passed 00:17:03.365 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.366 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.366 Test: blockdev write read max offset ...passed 00:17:03.366 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.366 Test: blockdev writev readv 8 blocks ...passed 00:17:03.366 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.366 Test: blockdev writev readv block ...passed 00:17:03.366 Test: blockdev writev readv size > 128k ...passed 00:17:03.366 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.366 Test: blockdev comparev and writev ...passed 00:17:03.366 Test: blockdev nvme passthru rw ...passed 00:17:03.366 Test: blockdev nvme passthru vendor specific ...passed 00:17:03.366 Test: blockdev nvme admin passthru ...passed 00:17:03.366 Test: blockdev copy ...passed 00:17:03.366 Suite: bdevio tests on: nvme1n1 00:17:03.366 Test: blockdev write read block ...passed 00:17:03.366 Test: blockdev write zeroes read block ...passed 00:17:03.366 Test: blockdev write zeroes read no split ...passed 00:17:03.366 Test: blockdev write zeroes read split ...passed 00:17:03.366 Test: blockdev write zeroes read split partial ...passed 00:17:03.366 Test: blockdev reset ...passed 00:17:03.366 Test: blockdev write read 8 blocks ...passed 00:17:03.366 Test: blockdev write read size > 128k ...passed 00:17:03.366 Test: blockdev write read invalid size ...passed 00:17:03.366 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.366 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.366 Test: blockdev write read max offset ...passed 00:17:03.366 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.366 Test: blockdev writev readv 8 blocks ...passed 00:17:03.366 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.366 Test: blockdev writev readv block ...passed 00:17:03.366 Test: blockdev writev readv size > 128k ...passed 00:17:03.366 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.366 Test: blockdev comparev and writev ...passed 00:17:03.366 Test: blockdev nvme passthru rw ...passed 00:17:03.366 Test: blockdev nvme passthru vendor specific ...passed 00:17:03.366 Test: blockdev nvme admin passthru ...passed 00:17:03.366 Test: blockdev copy ...passed 00:17:03.366 Suite: bdevio tests on: nvme0n3 00:17:03.366 Test: blockdev write read block ...passed 00:17:03.366 Test: blockdev write zeroes read block ...passed 00:17:03.366 Test: blockdev write zeroes read no split ...passed 00:17:03.366 Test: blockdev write zeroes read split ...passed 00:17:03.628 Test: blockdev write zeroes read split partial ...passed 00:17:03.628 Test: blockdev reset ...passed 00:17:03.628 Test: blockdev write read 8 blocks ...passed 00:17:03.628 Test: blockdev write read size > 128k ...passed 00:17:03.628 Test: blockdev write read invalid size ...passed 00:17:03.628 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.628 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.628 Test: blockdev write read max offset ...passed 00:17:03.628 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.628 Test: blockdev writev readv 8 blocks 
...passed 00:17:03.628 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.628 Test: blockdev writev readv block ...passed 00:17:03.628 Test: blockdev writev readv size > 128k ...passed 00:17:03.628 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.628 Test: blockdev comparev and writev ...passed 00:17:03.628 Test: blockdev nvme passthru rw ...passed 00:17:03.628 Test: blockdev nvme passthru vendor specific ...passed 00:17:03.628 Test: blockdev nvme admin passthru ...passed 00:17:03.628 Test: blockdev copy ...passed 00:17:03.628 Suite: bdevio tests on: nvme0n2 00:17:03.628 Test: blockdev write read block ...passed 00:17:03.628 Test: blockdev write zeroes read block ...passed 00:17:03.628 Test: blockdev write zeroes read no split ...passed 00:17:03.628 Test: blockdev write zeroes read split ...passed 00:17:03.628 Test: blockdev write zeroes read split partial ...passed 00:17:03.628 Test: blockdev reset ...passed 00:17:03.628 Test: blockdev write read 8 blocks ...passed 00:17:03.628 Test: blockdev write read size > 128k ...passed 00:17:03.628 Test: blockdev write read invalid size ...passed 00:17:03.628 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.628 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.628 Test: blockdev write read max offset ...passed 00:17:03.628 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.628 Test: blockdev writev readv 8 blocks ...passed 00:17:03.628 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.628 Test: blockdev writev readv block ...passed 00:17:03.628 Test: blockdev writev readv size > 128k ...passed 00:17:03.628 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.628 Test: blockdev comparev and writev ...passed 00:17:03.628 Test: blockdev nvme passthru rw ...passed 00:17:03.628 Test: blockdev nvme passthru vendor specific ...passed 00:17:03.628 Test: blockdev nvme admin passthru ...passed 00:17:03.628 Test: blockdev copy ...passed 00:17:03.628 Suite: bdevio tests on: nvme0n1 00:17:03.628 Test: blockdev write read block ...passed 00:17:03.628 Test: blockdev write zeroes read block ...passed 00:17:03.628 Test: blockdev write zeroes read no split ...passed 00:17:03.628 Test: blockdev write zeroes read split ...passed 00:17:03.628 Test: blockdev write zeroes read split partial ...passed 00:17:03.628 Test: blockdev reset ...passed 00:17:03.628 Test: blockdev write read 8 blocks ...passed 00:17:03.628 Test: blockdev write read size > 128k ...passed 00:17:03.628 Test: blockdev write read invalid size ...passed 00:17:03.628 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.628 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.628 Test: blockdev write read max offset ...passed 00:17:03.628 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.628 Test: blockdev writev readv 8 blocks ...passed 00:17:03.628 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.628 Test: blockdev writev readv block ...passed 00:17:03.628 Test: blockdev writev readv size > 128k ...passed 00:17:03.628 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.628 Test: blockdev comparev and writev ...passed 00:17:03.628 Test: blockdev nvme passthru rw ...passed 00:17:03.628 Test: blockdev nvme passthru vendor specific ...passed 00:17:03.628 Test: blockdev nvme admin passthru ...passed 00:17:03.628 Test: blockdev copy ...passed 
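One quick cross-check on the I/O targets header above: each MiB figure is just num_blocks times the 4096-byte block size. A throwaway helper (the function name is made up) reproduces them:

  mib() { echo $(( $1 * 4096 / 1024 / 1024 )); }
  mib 1048576   # 4096 -> nvme0n1, nvme0n2, nvme0n3
  mib 1310720   # 5120 -> nvme2n1
  mib 262144    # 1024 -> nvme3n1
  mib 1548666   # 6049 -> nvme1n1; the header rounds the exact 6049.78 MiB up to 6050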
00:17:03.628
00:17:03.628 Run Summary: Type Total Ran Passed Failed Inactive
00:17:03.628 suites 6 6 n/a 0 0
00:17:03.628 tests 138 138 138 0 0
00:17:03.628 asserts 780 780 780 0 n/a
00:17:03.628
00:17:03.628 Elapsed time = 1.275 seconds
00:17:03.629 0
00:17:03.629 12:20:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72618
00:17:03.629 12:20:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72618 ']'
00:17:03.629 12:20:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72618
00:17:03.629 12:20:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:17:03.629 12:20:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:03.629 12:20:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72618
00:17:03.629 killing process with pid 72618 12:20:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 12:20:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 12:20:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72618' 12:20:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72618 12:20:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72618
00:17:04.572 12:20:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:17:04.572
00:17:04.572 real 0m2.402s
00:17:04.572 user 0m5.764s
00:17:04.572 sys 0m0.435s
00:17:04.572 ************************************
00:17:04.572 END TEST bdev_bounds
00:17:04.572 ************************************
00:17:04.572 12:20:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:04.572 12:20:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:17:04.572 12:20:35 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:17:04.572 12:20:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:17:04.572 12:20:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:04.572 12:20:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:04.572 ************************************
00:17:04.572 START TEST bdev_nbd
00:17:04.572 ************************************
00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
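The bdev_nbd setup beginning above pairs the six bdevs with the six entries of nbd_list declared just below. In RPC terms (socket path and names from the trace, the loop being a paraphrase of the nbd_start_disk calls that follow) the mapping amounts to:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # left unquoted below so -s splits out
  bdevs=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
  nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  for i in "${!bdevs[@]}"; do
    $rpc nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"    # expose bdev i as a kernel block device
  done
  $rpc nbd_get_disks                                    # prints the device/bdev pairs echoed later in the log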
00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:04.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72676 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72676 /var/tmp/spdk-nbd.sock 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72676 ']' 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.572 12:20:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:04.572 [2024-12-05 12:20:35.430157] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
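Much of what follows is the waitfornbd helper being traced once per device; it boils down to polling /proc/partitions, then proving one block is readable with direct I/O. A condensed sketch; the retry delay and the scratch-file path are assumptions, since this excerpt only shows the loop bounds, the grep, and the dd/stat checks:

  waitfornbd() {
    local nbd_name=$1 i size
    for (( i = 1; i <= 20; i++ )); do                    # same bound as the @875/@888 loops
      grep -q -w "$nbd_name" /proc/partitions && break   # @876: does the kernel see it yet?
      sleep 0.1                                          # assumed delay; not visible in this excerpt
    done
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s /tmp/nbdtest)                      # @890: the copy must be non-empty
    rm -f /tmp/nbdtest                                   # @891: clean up the scratch file
    [ "$size" != 0 ]                                     # @892: 4096 bytes on success
  }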
00:17:04.572 [2024-12-05 12:20:35.430422] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.830 [2024-12-05 12:20:35.593842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.088 [2024-12-05 12:20:35.702356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.653 
1+0 records in 00:17:05.653 1+0 records out 00:17:05.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475867 s, 8.6 MB/s 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:05.653 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.911 1+0 records in 00:17:05.911 1+0 records out 00:17:05.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651675 s, 6.3 MB/s 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:05.911 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:06.169 12:20:36 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.169 1+0 records in 00:17:06.169 1+0 records out 00:17:06.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775749 s, 5.3 MB/s 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:06.169 12:20:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.427 1+0 records in 00:17:06.427 1+0 records out 00:17:06.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104344 s, 3.9 MB/s 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:06.427 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.685 1+0 records in 00:17:06.685 1+0 records out 00:17:06.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000862006 s, 4.8 MB/s 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:06.685 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:17:06.944 12:20:37 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.944 1+0 records in 00:17:06.944 1+0 records out 00:17:06.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000801228 s, 5.1 MB/s 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:06.944 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:07.203 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:07.203 { 00:17:07.203 "nbd_device": "/dev/nbd0", 00:17:07.203 "bdev_name": "nvme0n1" 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "nbd_device": "/dev/nbd1", 00:17:07.203 "bdev_name": "nvme0n2" 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "nbd_device": "/dev/nbd2", 00:17:07.203 "bdev_name": "nvme0n3" 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "nbd_device": "/dev/nbd3", 00:17:07.203 "bdev_name": "nvme1n1" 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "nbd_device": "/dev/nbd4", 00:17:07.203 "bdev_name": "nvme2n1" 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "nbd_device": "/dev/nbd5", 00:17:07.203 "bdev_name": "nvme3n1" 00:17:07.203 } 00:17:07.203 ]' 00:17:07.203 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:07.203 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:07.203 { 00:17:07.203 "nbd_device": "/dev/nbd0", 00:17:07.203 "bdev_name": "nvme0n1" 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "nbd_device": "/dev/nbd1", 00:17:07.203 "bdev_name": "nvme0n2" 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "nbd_device": "/dev/nbd2", 00:17:07.203 "bdev_name": "nvme0n3" 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "nbd_device": "/dev/nbd3", 00:17:07.203 "bdev_name": "nvme1n1" 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "nbd_device": "/dev/nbd4", 00:17:07.203 "bdev_name": "nvme2n1" 00:17:07.203 }, 00:17:07.203 { 00:17:07.203 "nbd_device": "/dev/nbd5", 00:17:07.203 "bdev_name": "nvme3n1" 00:17:07.203 } 00:17:07.203 ]' 00:17:07.203 12:20:37 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:07.203 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:07.203 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:07.203 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:07.203 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:07.203 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:07.203 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.203 12:20:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:07.462 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:07.462 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:07.462 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:07.462 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.462 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.462 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:07.462 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:07.462 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.462 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.462 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:07.462 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.720 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:07.978 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:07.978 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:07.978 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:07.978 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.978 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.978 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:07.978 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:07.978 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.978 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.978 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:08.236 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:08.236 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:08.236 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:08.236 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.236 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.236 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:08.236 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:08.236 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.236 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.236 12:20:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:08.496 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:08.496 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:08.496 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:08.496 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.496 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.496 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:08.496 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:08.496 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.496 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:08.496 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:08.496 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:08.755 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:17:09.015 /dev/nbd0 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:09.015 1+0 records in 00:17:09.015 1+0 records out 00:17:09.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063183 s, 6.5 MB/s 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:17:09.015 /dev/nbd1 00:17:09.015 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:09.276 1+0 records in 00:17:09.276 1+0 records out 00:17:09.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643869 s, 6.4 MB/s 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:09.276 12:20:39 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:09.276 12:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:17:09.276 /dev/nbd10 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:09.276 1+0 records in 00:17:09.276 1+0 records out 00:17:09.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106384 s, 3.9 MB/s 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:09.276 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:17:09.539 /dev/nbd11 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:09.539 12:20:40 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:09.539 1+0 records in 00:17:09.539 1+0 records out 00:17:09.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000823492 s, 5.0 MB/s 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:09.539 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:17:09.800 /dev/nbd12 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:09.800 1+0 records in 00:17:09.800 1+0 records out 00:17:09.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122832 s, 3.3 MB/s 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:09.800 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:17:10.061 /dev/nbd13 00:17:10.061 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:10.061 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:10.061 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:17:10.061 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:10.061 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:10.061 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:10.061 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:17:10.061 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:10.061 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:10.061 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:10.061 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.061 1+0 records in 00:17:10.062 1+0 records out 00:17:10.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000970001 s, 4.2 MB/s 00:17:10.062 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.062 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:10.062 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.062 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:10.062 12:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:10.062 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:10.062 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:10.062 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:10.062 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:10.062 12:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:10.324 { 00:17:10.324 "nbd_device": "/dev/nbd0", 00:17:10.324 "bdev_name": "nvme0n1" 00:17:10.324 }, 00:17:10.324 { 00:17:10.324 "nbd_device": "/dev/nbd1", 00:17:10.324 "bdev_name": "nvme0n2" 00:17:10.324 }, 00:17:10.324 { 00:17:10.324 "nbd_device": "/dev/nbd10", 00:17:10.324 "bdev_name": "nvme0n3" 00:17:10.324 }, 00:17:10.324 { 00:17:10.324 "nbd_device": "/dev/nbd11", 00:17:10.324 "bdev_name": "nvme1n1" 00:17:10.324 }, 00:17:10.324 { 00:17:10.324 "nbd_device": "/dev/nbd12", 00:17:10.324 "bdev_name": "nvme2n1" 00:17:10.324 }, 00:17:10.324 { 00:17:10.324 "nbd_device": "/dev/nbd13", 00:17:10.324 "bdev_name": "nvme3n1" 00:17:10.324 } 00:17:10.324 ]' 00:17:10.324 12:20:41 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:10.324 { 00:17:10.324 "nbd_device": "/dev/nbd0", 00:17:10.324 "bdev_name": "nvme0n1" 00:17:10.324 }, 00:17:10.324 { 00:17:10.324 "nbd_device": "/dev/nbd1", 00:17:10.324 "bdev_name": "nvme0n2" 00:17:10.324 }, 00:17:10.324 { 00:17:10.324 "nbd_device": "/dev/nbd10", 00:17:10.324 "bdev_name": "nvme0n3" 00:17:10.324 }, 00:17:10.324 { 00:17:10.324 "nbd_device": "/dev/nbd11", 00:17:10.324 "bdev_name": "nvme1n1" 00:17:10.324 }, 00:17:10.324 { 00:17:10.324 "nbd_device": "/dev/nbd12", 00:17:10.324 "bdev_name": "nvme2n1" 00:17:10.324 }, 00:17:10.324 { 00:17:10.324 "nbd_device": "/dev/nbd13", 00:17:10.324 "bdev_name": "nvme3n1" 00:17:10.324 } 00:17:10.324 ]' 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:10.324 /dev/nbd1 00:17:10.324 /dev/nbd10 00:17:10.324 /dev/nbd11 00:17:10.324 /dev/nbd12 00:17:10.324 /dev/nbd13' 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:10.324 /dev/nbd1 00:17:10.324 /dev/nbd10 00:17:10.324 /dev/nbd11 00:17:10.324 /dev/nbd12 00:17:10.324 /dev/nbd13' 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:10.324 256+0 records in 00:17:10.324 256+0 records out 00:17:10.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0083008 s, 126 MB/s 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:10.324 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:10.585 256+0 records in 00:17:10.585 256+0 records out 00:17:10.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236245 s, 4.4 MB/s 00:17:10.585 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:10.585 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:10.845 256+0 records in 00:17:10.845 256+0 records out 00:17:10.845 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.244793 s, 4.3 MB/s 00:17:10.845 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:10.845 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:11.105 256+0 records in 00:17:11.105 256+0 records out 00:17:11.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.205956 s, 5.1 MB/s 00:17:11.105 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:11.105 12:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:11.366 256+0 records in 00:17:11.366 256+0 records out 00:17:11.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.317882 s, 3.3 MB/s 00:17:11.366 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:11.366 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:11.627 256+0 records in 00:17:11.627 256+0 records out 00:17:11.627 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.249832 s, 4.2 MB/s 00:17:11.627 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:11.627 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:11.887 256+0 records in 00:17:11.887 256+0 records out 00:17:11.887 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.207333 s, 5.1 MB/s 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:11.887 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:12.146 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:12.146 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:12.146 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:12.146 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.146 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.146 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:12.146 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:12.146 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.146 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.147 12:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:12.405 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:12.405 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:12.405 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:12.405 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.405 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.405 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:12.405 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:12.405 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.405 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.405 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:17:12.662 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:12.662 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:12.662 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:12.662 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.663 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.663 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:12.663 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:12.663 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.663 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.663 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:12.921 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:12.921 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:12.921 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:12.921 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.921 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.921 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:12.921 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:12.922 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.922 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.922 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:13.181 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:13.181 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:13.181 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:13.181 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.181 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.181 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:13.181 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:13.181 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.181 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.181 12:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:13.441 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:13.441 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:13.441 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:13.441 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.441 12:20:44 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.441 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:13.441 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:13.441 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.441 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:13.441 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:13.441 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:13.700 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:13.959 malloc_lvol_verify 00:17:13.959 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:13.959 0b96db4a-43f2-4ec0-a8dd-17c2994c6a84 00:17:13.959 12:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:14.218 f9eb9b07-2308-4672-b1e2-cc9bb2a76b07 00:17:14.218 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:14.491 /dev/nbd0 00:17:14.491 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:14.491 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:14.491 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:14.491 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:14.492 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:17:14.492 mke2fs 1.47.0 (5-Feb-2023)
00:17:14.492 Discarding device blocks: 0/4096 done
00:17:14.492 Creating filesystem with 4096 1k blocks and 1024 inodes
00:17:14.492
00:17:14.492 Allocating group tables: 0/1 done
00:17:14.492 Writing inode tables: 0/1 done
00:17:14.492 Creating journal (1024 blocks): done
00:17:14.492 Writing superblocks and filesystem accounting information: 0/1 done
00:17:14.492
00:17:14.492 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:17:14.492 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:17:14.492 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:17:14.492 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:14.492 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:17:14.492 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:14.492 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72676
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72676 ']'
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72676
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72676
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:14.758 killing process with pid 72676
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72676'
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72676
00:17:14.758 12:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72676
00:17:15.697 12:20:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:17:15.697
00:17:15.697 real 0m11.130s
00:17:15.697 user 0m14.850s
00:17:15.697 sys 0m3.738s
00:17:15.697 12:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:15.697 ************************************
00:17:15.697 END TEST bdev_nbd
00:17:15.697 ************************************
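For readers skimming the trace: the nbd_with_lvol_verify step that just completed reduces to a short RPC sequence against the spdk-nbd socket. A minimal standalone sketch reconstructed from the trace above (the socket path, bdev names, and sizes are the ones the test used; the grouping into a script is editorial, not the literal test code):

# Build an lvolstore on a 16 MB malloc bdev with 512-byte blocks, carve out a 4 MB
# lvol, expose it over NBD, and prove the device works by putting a filesystem on it.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0   # only succeeds if the NBD device came up with usable capacity
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0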
00:17:15.697 12:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:17:15.697 12:20:46 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:17:15.697 12:20:46 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']'
00:17:15.697 12:20:46 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']'
00:17:15.697 12:20:46 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite ''
00:17:15.697 12:20:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:15.697 12:20:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:15.697 12:20:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:15.697 ************************************
00:17:15.697 START TEST bdev_fio
00:17:15.697 ************************************
00:17:15.697 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite ''
00:17:15.697 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:17:15.697 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:17:15.697 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:17:15.697 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']'
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']'
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']'
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo
serialize_overlap=1 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:15.957 ************************************ 00:17:15.957 START TEST bdev_fio_rw_verify 00:17:15.957 ************************************ 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:15.957 12:20:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:16.217 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:16.217 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:16.217 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:16.217 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:16.217 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:16.217 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:16.217 fio-3.35 00:17:16.217 Starting 6 threads 00:17:28.554 00:17:28.554 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=73088: Thu Dec 5 12:20:57 2024 00:17:28.554 read: IOPS=13.5k, BW=52.9MiB/s (55.4MB/s)(529MiB/10002msec) 00:17:28.554 slat (usec): min=2, max=2116, avg= 7.67, stdev=17.56 00:17:28.554 clat (usec): min=97, max=7394, avg=1423.90, 
stdev=749.60
00:17:28.554 lat (usec): min=102, max=7407, avg=1431.57, stdev=750.23
00:17:28.554 clat percentiles (usec):
00:17:28.554 | 50.000th=[ 1336], 99.000th=[ 3720], 99.900th=[ 5407], 99.990th=[ 6521],
00:17:28.554 | 99.999th=[ 7308]
00:17:28.554 write: IOPS=13.9k, BW=54.2MiB/s (56.8MB/s)(542MiB/10002msec); 0 zone resets
00:17:28.554 slat (usec): min=10, max=3780, avg=43.39, stdev=143.94
00:17:28.554 clat (usec): min=120, max=9129, avg=1730.05, stdev=827.02
00:17:28.554 lat (usec): min=144, max=9162, avg=1773.44, stdev=839.04
00:17:28.554 clat percentiles (usec):
00:17:28.554 | 50.000th=[ 1598], 99.000th=[ 4293], 99.900th=[ 5604], 99.990th=[ 7308],
00:17:28.554 | 99.999th=[ 9110]
00:17:28.554 bw ( KiB/s): min=48531, max=64538, per=100.00%, avg=55458.79, stdev=950.85, samples=114
00:17:28.554 iops : min=12129, max=16133, avg=13863.58, stdev=237.79, samples=114
00:17:28.554 lat (usec) : 100=0.01%, 250=0.85%, 500=4.80%, 750=7.59%, 1000=10.45%
00:17:28.554 lat (msec) : 2=51.54%, 4=23.66%, 10=1.12%
00:17:28.554 cpu : usr=43.64%, sys=32.61%, ctx=5327, majf=0, minf=14080
00:17:28.554 IO depths : 1=11.3%, 2=23.7%, 4=51.2%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:28.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:28.554 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:28.554 issued rwts: total=135375,138667,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:28.554 latency : target=0, window=0, percentile=100.00%, depth=8
00:17:28.554
00:17:28.554 Run status group 0 (all jobs):
00:17:28.554 READ: bw=52.9MiB/s (55.4MB/s), 52.9MiB/s-52.9MiB/s (55.4MB/s-55.4MB/s), io=529MiB (554MB), run=10002-10002msec
00:17:28.554 WRITE: bw=54.2MiB/s (56.8MB/s), 54.2MiB/s-54.2MiB/s (56.8MB/s-56.8MB/s), io=542MiB (568MB), run=10002-10002msec
00:17:28.554 -----------------------------------------------------
00:17:28.554 Suppressions used:
00:17:28.554 count bytes template
00:17:28.554 6 48 /usr/src/fio/parse.c
00:17:28.554 3207 307872 /usr/src/fio/iolog.c
00:17:28.554 1 8 libtcmalloc_minimal.so
00:17:28.554 1 904 libcrypto.so
00:17:28.554 -----------------------------------------------------
00:17:28.554
00:17:28.554
00:17:28.554 real 0m12.187s
00:17:28.554 user 0m27.803s
00:17:28.554 sys 0m19.984s
00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:28.554 ************************************
00:17:28.554 END TEST bdev_fio_rw_verify
00:17:28.554 ************************************
00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local
fio_dir=/usr/src/fio 00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:28.554 12:20:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:28.555 12:20:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "27dd947a-1f55-4c47-81a0-83795fc8ca64"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "27dd947a-1f55-4c47-81a0-83795fc8ca64",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "7326a582-ac05-4783-b722-9021559a8a30"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7326a582-ac05-4783-b722-9021559a8a30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "b03b8401-ed29-485f-8696-c62dfdf24acf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b03b8401-ed29-485f-8696-c62dfdf24acf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "fc15fea7-5fe9-4227-8355-d6ca8337244d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fc15fea7-5fe9-4227-8355-d6ca8337244d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "36649b82-c8d2-459e-815f-2df20384d4c2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "36649b82-c8d2-459e-815f-2df20384d4c2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "df0c118c-8153-4e26-b163-7b76de5aa596"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "df0c118c-8153-4e26-b163-7b76de5aa596",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:28.555 12:20:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:28.555 12:20:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:28.555 /home/vagrant/spdk_repo/spdk 00:17:28.555 12:20:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:28.555 12:20:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:28.555 12:20:58 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:17:28.555
00:17:28.555 real 0m12.378s
00:17:28.555 user 0m27.885s
00:17:28.555 sys 0m20.072s
00:17:28.555 12:20:58 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:28.555 ************************************
00:17:28.555 END TEST bdev_fio
00:17:28.555 ************************************
00:17:28.555 12:20:58 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:17:28.555 12:20:58 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:17:28.555 12:20:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:17:28.555 12:20:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:28.555 12:20:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:28.555 ************************************
00:17:28.555 START TEST bdev_verify
00:17:28.555 ************************************
00:17:28.555 12:20:59 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:17:28.555 [2024-12-05 12:20:59.088311] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:17:28.555 [2024-12-05 12:20:59.088449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73261 ]
00:17:28.555 [2024-12-05 12:20:59.253939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:17:28.555 [2024-12-05 12:20:59.402775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:28.555 [2024-12-05 12:20:59.402868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:29.128 Running I/O for 5 seconds...
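The bdev_verify stage starting here is SPDK's bdevperf example app pointed at the same bdev.json the fio suite used. Pulling the invocation out of the run_test line above (every flag is verbatim from the trace; the glosses in the comment are my reading of bdevperf's standard options, not something the log states):

# 5-second "verify" workload (write, read back, compare) with queue depth 128 and
# 4 KiB I/Os; -m 0x3 runs on cores 0-1, and -C lets every core drive every bdev,
# which is why each device appears twice below, once per core mask 0x1 and 0x2.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3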
00:17:31.460 23456.00 IOPS, 91.62 MiB/s [2024-12-05T12:21:03.274Z] 23312.00 IOPS, 91.06 MiB/s [2024-12-05T12:21:04.217Z] 23861.33 IOPS, 93.21 MiB/s [2024-12-05T12:21:05.161Z] 23424.00 IOPS, 91.50 MiB/s [2024-12-05T12:21:05.161Z] 23571.20 IOPS, 92.08 MiB/s
00:17:34.293 Latency(us)
00:17:34.293 [2024-12-05T12:21:05.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:34.293 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:34.293 Verification LBA range: start 0x0 length 0x80000
00:17:34.293 nvme0n1 : 5.06 1871.33 7.31 0.00 0.00 68273.32 9527.93 70577.23
00:17:34.293 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:34.293 Verification LBA range: start 0x80000 length 0x80000
00:17:34.293 nvme0n1 : 5.03 1805.23 7.05 0.00 0.00 70768.33 9779.99 79046.50
00:17:34.293 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:34.293 Verification LBA range: start 0x0 length 0x80000
00:17:34.293 nvme0n2 : 5.02 1861.39 7.27 0.00 0.00 68481.65 10233.70 66544.25
00:17:34.293 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:34.293 Verification LBA range: start 0x80000 length 0x80000
00:17:34.293 nvme0n2 : 5.04 1779.24 6.95 0.00 0.00 71646.47 9679.16 74610.22
00:17:34.293 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:34.293 Verification LBA range: start 0x0 length 0x80000
00:17:34.293 nvme0n3 : 5.08 1864.79 7.28 0.00 0.00 68233.73 11090.71 66947.54
00:17:34.293 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:34.293 Verification LBA range: start 0x80000 length 0x80000
00:17:34.293 nvme0n3 : 5.08 1787.37 6.98 0.00 0.00 71184.02 12351.02 66947.54
00:17:34.293 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:34.293 Verification LBA range: start 0x0 length 0xbd0bd
00:17:34.293 nvme1n1 : 5.08 2477.92 9.68 0.00 0.00 51223.51 5016.02 66947.54
00:17:34.293 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:34.293 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:17:34.293 nvme1n1 : 5.08 2408.43 9.41 0.00 0.00 52700.22 5847.83 60898.07
00:17:34.293 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:34.293 Verification LBA range: start 0x0 length 0xa0000
00:17:34.293 nvme2n1 : 5.08 1914.41 7.48 0.00 0.00 66233.53 5948.65 74206.92
00:17:34.293 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:34.293 Verification LBA range: start 0xa0000 length 0xa0000
00:17:34.293 nvme2n1 : 5.08 1814.65 7.09 0.00 0.00 69663.29 9679.16 73400.32
00:17:34.293 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:34.293 Verification LBA range: start 0x0 length 0x20000
00:17:34.293 nvme3n1 : 5.08 1888.62 7.38 0.00 0.00 66973.33 4637.93 69770.63
00:17:34.293 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:17:34.293 Verification LBA range: start 0x20000 length 0x20000
00:17:34.293 nvme3n1 : 5.09 1810.27 7.07 0.00 0.00 69736.35 7057.72 75416.81
00:17:34.293 [2024-12-05T12:21:05.162Z] ===================================================================================================================
00:17:34.293 [2024-12-05T12:21:05.162Z] Total : 23283.65 90.95 0.00 0.00 65477.25 4637.93 79046.50
00:17:35.233
00:17:35.233 real 0m6.911s
00:17:35.233 user 0m11.013s
00:17:35.233 sys 0m1.595s
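A quick cross-check of the table above: with -o 4096, throughput in MiB/s is IOPS x 4096 B / 2^20, i.e. IOPS / 256. The Total row fits exactly (23283.65 IOPS / 256 = 90.95 MiB/s), and the same ratio holds for each per-device row, e.g. nvme0n1 on core mask 0x1: 1871.33 IOPS / 256 = 7.31 MiB/s.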
common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.233 ************************************ 00:17:35.233 END TEST bdev_verify 00:17:35.233 ************************************ 00:17:35.233 12:21:05 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:35.233 12:21:05 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:35.233 12:21:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:35.233 12:21:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.233 12:21:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:35.233 ************************************ 00:17:35.233 START TEST bdev_verify_big_io 00:17:35.233 ************************************ 00:17:35.234 12:21:05 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:35.234 [2024-12-05 12:21:06.076430] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:17:35.234 [2024-12-05 12:21:06.076610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73363 ] 00:17:35.494 [2024-12-05 12:21:06.242715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:35.755 [2024-12-05 12:21:06.391741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.755 [2024-12-05 12:21:06.391836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.328 Running I/O for 5 seconds... 
00:17:41.924 2024.00 IOPS, 126.50 MiB/s [2024-12-05T12:21:13.365Z] 2680.00 IOPS, 167.50 MiB/s [2024-12-05T12:21:13.365Z] 2888.00 IOPS, 180.50 MiB/s
00:17:42.496 Latency(us)
00:17:42.496 [2024-12-05T12:21:13.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:42.496 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:42.496 Verification LBA range: start 0x0 length 0x8000
00:17:42.496 nvme0n1 : 6.14 65.18 4.07 0.00 0.00 1831416.63 290374.89 1961643.72
00:17:42.496 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:42.496 Verification LBA range: start 0x8000 length 0x8000
00:17:42.496 nvme0n1 : 5.71 123.28 7.71 0.00 0.00 1011714.29 162932.58 1548666.09
00:17:42.496 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:42.496 Verification LBA range: start 0x0 length 0x8000
00:17:42.496 nvme0n2 : 5.43 94.26 5.89 0.00 0.00 1247854.10 6074.68 1232480.10
00:17:42.496 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:42.496 Verification LBA range: start 0x8000 length 0x8000
00:17:42.496 nvme0n2 : 5.81 141.86 8.87 0.00 0.00 846296.28 7259.37 1148594.02
00:17:42.496 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:42.496 Verification LBA range: start 0x0 length 0x8000
00:17:42.496 nvme0n3 : 5.96 83.27 5.20 0.00 0.00 1345435.38 55655.19 1742249.35
00:17:42.496 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:42.496 Verification LBA range: start 0x8000 length 0x8000
00:17:42.496 nvme0n3 : 5.71 134.45 8.40 0.00 0.00 862667.49 17644.31 764653.88
00:17:42.496 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:42.496 Verification LBA range: start 0x0 length 0xbd0b
00:17:42.496 nvme1n1 : 5.98 128.43 8.03 0.00 0.00 831347.53 59688.17 1393799.48
00:17:42.496 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:42.496 Verification LBA range: start 0xbd0b length 0xbd0b
00:17:42.496 nvme1n1 : 5.88 125.18 7.82 0.00 0.00 915131.33 28432.54 2064888.12
00:17:42.496 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:42.496 Verification LBA range: start 0x0 length 0xa000
00:17:42.496 nvme2n1 : 6.11 146.55 9.16 0.00 0.00 706090.35 2508.01 1677721.60
00:17:42.496 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:42.496 Verification LBA range: start 0xa000 length 0xa000
00:17:42.496 nvme2n1 : 5.81 141.79 8.86 0.00 0.00 788845.43 105664.20 1142141.24
00:17:42.496 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:42.496 Verification LBA range: start 0x0 length 0x2000
00:17:42.496 nvme3n1 : 6.29 241.69 15.11 0.00 0.00 411158.44 403.30 2335904.69
00:17:42.496 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:42.496 Verification LBA range: start 0x2000 length 0x2000
00:17:42.496 nvme3n1 : 5.89 130.46 8.15 0.00 0.00 842412.50 1587.99 2245565.83
00:17:42.496 [2024-12-05T12:21:13.365Z] ===================================================================================================================
00:17:42.496 [2024-12-05T12:21:13.365Z] Total : 1556.41 97.28 0.00 0.00 865618.30 403.30 2335904.69
00:17:43.440
00:17:43.440 real 0m8.239s
00:17:43.440 user 0m15.081s
00:17:43.440 sys 0m0.506s
00:17:43.440 12:21:14 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:43.440 12:21:14
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.440 ************************************ 00:17:43.440 END TEST bdev_verify_big_io 00:17:43.440 ************************************ 00:17:43.440 12:21:14 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:43.440 12:21:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:43.440 12:21:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.440 12:21:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:43.440 ************************************ 00:17:43.440 START TEST bdev_write_zeroes 00:17:43.440 ************************************ 00:17:43.440 12:21:14 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:43.702 [2024-12-05 12:21:14.364764] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:17:43.702 [2024-12-05 12:21:14.364881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73484 ] 00:17:43.702 [2024-12-05 12:21:14.527670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.962 [2024-12-05 12:21:14.634136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.223 Running I/O for 1 seconds... 
00:17:45.607 68608.00 IOPS, 268.00 MiB/s
00:17:45.607 Latency(us)
00:17:45.607 [2024-12-05T12:21:16.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:45.607 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:45.607 nvme0n1 : 1.02 11060.33 43.20 0.00 0.00 11561.68 6906.49 23492.14
00:17:45.607 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:45.607 nvme0n2 : 1.02 11045.60 43.15 0.00 0.00 11567.48 7007.31 23996.26
00:17:45.607 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:45.607 nvme0n3 : 1.02 11030.27 43.09 0.00 0.00 11573.66 6956.90 24601.21
00:17:45.607 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:45.607 nvme1n1 : 1.03 12684.53 49.55 0.00 0.00 10054.51 4133.81 20164.92
00:17:45.607 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:45.607 nvme2n1 : 1.03 11178.54 43.67 0.00 0.00 11401.19 3806.13 22080.59
00:17:45.607 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:45.607 nvme3n1 : 1.03 11040.92 43.13 0.00 0.00 11482.15 4385.87 23996.26
00:17:45.607 [2024-12-05T12:21:16.476Z] ===================================================================================================================
00:17:45.607 [2024-12-05T12:21:16.476Z] Total : 68040.19 265.78 0.00 0.00 11242.98 3806.13 24601.21
00:17:46.180
00:17:46.180 real 0m2.606s
00:17:46.180 user 0m1.913s
00:17:46.180 sys 0m0.488s
00:17:46.180 12:21:16 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:46.180 ************************************
00:17:46.180 12:21:16 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:17:46.180 END TEST bdev_write_zeroes
00:17:46.180 ************************************
00:17:46.180 12:21:16 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:17:46.180 12:21:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:17:46.180 12:21:16 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:46.180 12:21:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:46.180 ************************************
00:17:46.180 START TEST bdev_json_nonenclosed
00:17:46.180 ************************************
00:17:46.180 12:21:16 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:17:46.180 [2024-12-05 12:21:17.024342] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:17:46.180 [2024-12-05 12:21:17.024457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73532 ] 00:17:46.440 [2024-12-05 12:21:17.186644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.440 [2024-12-05 12:21:17.294513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.440 [2024-12-05 12:21:17.294600] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:46.440 [2024-12-05 12:21:17.294618] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:46.440 [2024-12-05 12:21:17.294628] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:46.701 00:17:46.701 real 0m0.524s 00:17:46.701 user 0m0.312s 00:17:46.701 sys 0m0.106s 00:17:46.701 12:21:17 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.701 ************************************ 00:17:46.701 END TEST bdev_json_nonenclosed 00:17:46.701 12:21:17 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:46.701 ************************************ 00:17:46.701 12:21:17 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:46.701 12:21:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:46.701 12:21:17 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.701 12:21:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:46.701 ************************************ 00:17:46.701 START TEST bdev_json_nonarray 00:17:46.701 ************************************ 00:17:46.701 12:21:17 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:46.962 [2024-12-05 12:21:17.605165] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:17:46.962 [2024-12-05 12:21:17.605282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73557 ] 00:17:46.962 [2024-12-05 12:21:17.765611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.225 [2024-12-05 12:21:17.889035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.225 [2024-12-05 12:21:17.889139] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:47.225 [2024-12-05 12:21:17.889158] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:47.225 [2024-12-05 12:21:17.889169] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:47.225 00:17:47.225 real 0m0.534s 00:17:47.225 user 0m0.318s 00:17:47.225 sys 0m0.110s 00:17:47.225 12:21:18 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.225 ************************************ 00:17:47.225 12:21:18 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:47.225 END TEST bdev_json_nonarray 00:17:47.225 ************************************ 00:17:47.485 12:21:18 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:17:47.485 12:21:18 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:17:47.485 12:21:18 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:17:47.485 12:21:18 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:17:47.485 12:21:18 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:17:47.485 12:21:18 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:47.485 12:21:18 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:47.485 12:21:18 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:17:47.485 12:21:18 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:17:47.485 12:21:18 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:17:47.485 12:21:18 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:17:47.485 12:21:18 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:47.747 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:51.950 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:51.950 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:52.520 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:52.781 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:52.781 00:17:52.781 real 0m57.051s 00:17:52.781 user 1m24.041s 00:17:52.781 sys 0m36.759s 00:17:52.781 12:21:23 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.781 ************************************ 00:17:52.781 END TEST blockdev_xnvme 00:17:52.781 ************************************ 00:17:52.781 12:21:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:52.781 12:21:23 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:52.781 12:21:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:52.781 12:21:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.781 12:21:23 -- common/autotest_common.sh@10 -- # set +x 00:17:52.781 ************************************ 00:17:52.781 START TEST ublk 00:17:52.781 ************************************ 00:17:52.781 12:21:23 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:52.781 * Looking for test storage... 
00:17:52.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:52.781 12:21:23 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:52.781 12:21:23 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:17:52.781 12:21:23 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:53.043 12:21:23 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:53.043 12:21:23 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.043 12:21:23 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.043 12:21:23 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.043 12:21:23 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.043 12:21:23 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.043 12:21:23 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.043 12:21:23 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.043 12:21:23 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.043 12:21:23 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.043 12:21:23 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.043 12:21:23 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.043 12:21:23 ublk -- scripts/common.sh@344 -- # case "$op" in 00:17:53.043 12:21:23 ublk -- scripts/common.sh@345 -- # : 1 00:17:53.043 12:21:23 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.043 12:21:23 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:53.043 12:21:23 ublk -- scripts/common.sh@365 -- # decimal 1 00:17:53.043 12:21:23 ublk -- scripts/common.sh@353 -- # local d=1 00:17:53.043 12:21:23 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.043 12:21:23 ublk -- scripts/common.sh@355 -- # echo 1 00:17:53.043 12:21:23 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.043 12:21:23 ublk -- scripts/common.sh@366 -- # decimal 2 00:17:53.043 12:21:23 ublk -- scripts/common.sh@353 -- # local d=2 00:17:53.043 12:21:23 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.043 12:21:23 ublk -- scripts/common.sh@355 -- # echo 2 00:17:53.043 12:21:23 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.043 12:21:23 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.043 12:21:23 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.043 12:21:23 ublk -- scripts/common.sh@368 -- # return 0 00:17:53.043 12:21:23 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.043 12:21:23 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:53.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.043 --rc genhtml_branch_coverage=1 00:17:53.043 --rc genhtml_function_coverage=1 00:17:53.043 --rc genhtml_legend=1 00:17:53.043 --rc geninfo_all_blocks=1 00:17:53.043 --rc geninfo_unexecuted_blocks=1 00:17:53.043 00:17:53.043 ' 00:17:53.043 12:21:23 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:53.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.043 --rc genhtml_branch_coverage=1 00:17:53.043 --rc genhtml_function_coverage=1 00:17:53.043 --rc genhtml_legend=1 00:17:53.043 --rc geninfo_all_blocks=1 00:17:53.043 --rc geninfo_unexecuted_blocks=1 00:17:53.043 00:17:53.043 ' 00:17:53.043 12:21:23 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:53.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.043 --rc genhtml_branch_coverage=1 00:17:53.043 --rc 
genhtml_function_coverage=1 00:17:53.043 --rc genhtml_legend=1 00:17:53.043 --rc geninfo_all_blocks=1 00:17:53.043 --rc geninfo_unexecuted_blocks=1 00:17:53.043 00:17:53.043 ' 00:17:53.043 12:21:23 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:53.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.043 --rc genhtml_branch_coverage=1 00:17:53.043 --rc genhtml_function_coverage=1 00:17:53.043 --rc genhtml_legend=1 00:17:53.043 --rc geninfo_all_blocks=1 00:17:53.043 --rc geninfo_unexecuted_blocks=1 00:17:53.043 00:17:53.043 ' 00:17:53.043 12:21:23 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:53.043 12:21:23 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:53.043 12:21:23 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:53.043 12:21:23 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:53.043 12:21:23 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:53.043 12:21:23 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:53.043 12:21:23 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:53.043 12:21:23 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:53.043 12:21:23 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:53.043 12:21:23 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:17:53.043 12:21:23 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:17:53.043 12:21:23 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:17:53.043 12:21:23 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:17:53.043 12:21:23 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:17:53.043 12:21:23 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:17:53.043 12:21:23 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:17:53.043 12:21:23 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:17:53.043 12:21:23 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:17:53.043 12:21:23 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:17:53.043 12:21:23 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:17:53.043 12:21:23 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.043 12:21:23 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.043 12:21:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:53.043 ************************************ 00:17:53.043 START TEST test_save_ublk_config 00:17:53.043 ************************************ 00:17:53.043 12:21:23 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:17:53.043 12:21:23 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:17:53.043 12:21:23 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73858 00:17:53.043 12:21:23 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:17:53.043 12:21:23 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73858 00:17:53.043 12:21:23 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:17:53.043 12:21:23 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73858 ']' 00:17:53.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:53.043 12:21:23 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.043 12:21:23 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.043 12:21:23 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.043 12:21:23 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.043 12:21:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:53.043 [2024-12-05 12:21:23.849213] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:17:53.044 [2024-12-05 12:21:23.849383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73858 ] 00:17:53.304 [2024-12-05 12:21:24.016915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.304 [2024-12-05 12:21:24.128607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.249 12:21:24 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.249 12:21:24 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:54.249 12:21:24 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:17:54.249 12:21:24 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:17:54.249 12:21:24 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.249 12:21:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:54.249 [2024-12-05 12:21:24.865498] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:54.249 [2024-12-05 12:21:24.866494] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:54.249 malloc0 00:17:54.249 [2024-12-05 12:21:24.945648] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:54.249 [2024-12-05 12:21:24.945757] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:54.249 [2024-12-05 12:21:24.945771] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:54.249 [2024-12-05 12:21:24.945780] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:54.249 [2024-12-05 12:21:24.954639] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:54.249 [2024-12-05 12:21:24.954669] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:54.249 [2024-12-05 12:21:24.961516] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:54.249 [2024-12-05 12:21:24.961647] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:54.249 [2024-12-05 12:21:24.978498] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:54.249 0 00:17:54.249 12:21:24 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.249 12:21:24 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:17:54.249 12:21:24 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.249 12:21:24 ublk.test_save_ublk_config -- 
common/autotest_common.sh@10 -- # set +x 00:17:54.510 12:21:25 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.510 12:21:25 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:17:54.510 "subsystems": [ 00:17:54.510 { 00:17:54.510 "subsystem": "fsdev", 00:17:54.510 "config": [ 00:17:54.510 { 00:17:54.510 "method": "fsdev_set_opts", 00:17:54.510 "params": { 00:17:54.510 "fsdev_io_pool_size": 65535, 00:17:54.510 "fsdev_io_cache_size": 256 00:17:54.510 } 00:17:54.510 } 00:17:54.510 ] 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "subsystem": "keyring", 00:17:54.510 "config": [] 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "subsystem": "iobuf", 00:17:54.510 "config": [ 00:17:54.510 { 00:17:54.510 "method": "iobuf_set_options", 00:17:54.510 "params": { 00:17:54.510 "small_pool_count": 8192, 00:17:54.510 "large_pool_count": 1024, 00:17:54.510 "small_bufsize": 8192, 00:17:54.510 "large_bufsize": 135168, 00:17:54.510 "enable_numa": false 00:17:54.510 } 00:17:54.510 } 00:17:54.510 ] 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "subsystem": "sock", 00:17:54.510 "config": [ 00:17:54.510 { 00:17:54.510 "method": "sock_set_default_impl", 00:17:54.510 "params": { 00:17:54.510 "impl_name": "posix" 00:17:54.510 } 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "method": "sock_impl_set_options", 00:17:54.510 "params": { 00:17:54.510 "impl_name": "ssl", 00:17:54.510 "recv_buf_size": 4096, 00:17:54.510 "send_buf_size": 4096, 00:17:54.510 "enable_recv_pipe": true, 00:17:54.510 "enable_quickack": false, 00:17:54.510 "enable_placement_id": 0, 00:17:54.510 "enable_zerocopy_send_server": true, 00:17:54.510 "enable_zerocopy_send_client": false, 00:17:54.510 "zerocopy_threshold": 0, 00:17:54.510 "tls_version": 0, 00:17:54.510 "enable_ktls": false 00:17:54.510 } 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "method": "sock_impl_set_options", 00:17:54.510 "params": { 00:17:54.510 "impl_name": "posix", 00:17:54.510 "recv_buf_size": 2097152, 00:17:54.510 "send_buf_size": 2097152, 00:17:54.510 "enable_recv_pipe": true, 00:17:54.510 "enable_quickack": false, 00:17:54.510 "enable_placement_id": 0, 00:17:54.510 "enable_zerocopy_send_server": true, 00:17:54.510 "enable_zerocopy_send_client": false, 00:17:54.510 "zerocopy_threshold": 0, 00:17:54.510 "tls_version": 0, 00:17:54.510 "enable_ktls": false 00:17:54.510 } 00:17:54.510 } 00:17:54.510 ] 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "subsystem": "vmd", 00:17:54.510 "config": [] 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "subsystem": "accel", 00:17:54.510 "config": [ 00:17:54.510 { 00:17:54.510 "method": "accel_set_options", 00:17:54.510 "params": { 00:17:54.510 "small_cache_size": 128, 00:17:54.510 "large_cache_size": 16, 00:17:54.510 "task_count": 2048, 00:17:54.510 "sequence_count": 2048, 00:17:54.510 "buf_count": 2048 00:17:54.510 } 00:17:54.510 } 00:17:54.510 ] 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "subsystem": "bdev", 00:17:54.510 "config": [ 00:17:54.510 { 00:17:54.510 "method": "bdev_set_options", 00:17:54.510 "params": { 00:17:54.510 "bdev_io_pool_size": 65535, 00:17:54.510 "bdev_io_cache_size": 256, 00:17:54.510 "bdev_auto_examine": true, 00:17:54.510 "iobuf_small_cache_size": 128, 00:17:54.510 "iobuf_large_cache_size": 16 00:17:54.510 } 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "method": "bdev_raid_set_options", 00:17:54.510 "params": { 00:17:54.510 "process_window_size_kb": 1024, 00:17:54.510 "process_max_bandwidth_mb_sec": 0 00:17:54.510 } 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "method": 
"bdev_iscsi_set_options", 00:17:54.510 "params": { 00:17:54.510 "timeout_sec": 30 00:17:54.510 } 00:17:54.510 }, 00:17:54.510 { 00:17:54.510 "method": "bdev_nvme_set_options", 00:17:54.510 "params": { 00:17:54.510 "action_on_timeout": "none", 00:17:54.510 "timeout_us": 0, 00:17:54.510 "timeout_admin_us": 0, 00:17:54.510 "keep_alive_timeout_ms": 10000, 00:17:54.510 "arbitration_burst": 0, 00:17:54.510 "low_priority_weight": 0, 00:17:54.510 "medium_priority_weight": 0, 00:17:54.510 "high_priority_weight": 0, 00:17:54.510 "nvme_adminq_poll_period_us": 10000, 00:17:54.510 "nvme_ioq_poll_period_us": 0, 00:17:54.510 "io_queue_requests": 0, 00:17:54.510 "delay_cmd_submit": true, 00:17:54.510 "transport_retry_count": 4, 00:17:54.510 "bdev_retry_count": 3, 00:17:54.510 "transport_ack_timeout": 0, 00:17:54.510 "ctrlr_loss_timeout_sec": 0, 00:17:54.510 "reconnect_delay_sec": 0, 00:17:54.510 "fast_io_fail_timeout_sec": 0, 00:17:54.510 "disable_auto_failback": false, 00:17:54.510 "generate_uuids": false, 00:17:54.510 "transport_tos": 0, 00:17:54.510 "nvme_error_stat": false, 00:17:54.510 "rdma_srq_size": 0, 00:17:54.510 "io_path_stat": false, 00:17:54.510 "allow_accel_sequence": false, 00:17:54.510 "rdma_max_cq_size": 0, 00:17:54.510 "rdma_cm_event_timeout_ms": 0, 00:17:54.510 "dhchap_digests": [ 00:17:54.510 "sha256", 00:17:54.510 "sha384", 00:17:54.510 "sha512" 00:17:54.510 ], 00:17:54.511 "dhchap_dhgroups": [ 00:17:54.511 "null", 00:17:54.511 "ffdhe2048", 00:17:54.511 "ffdhe3072", 00:17:54.511 "ffdhe4096", 00:17:54.511 "ffdhe6144", 00:17:54.511 "ffdhe8192" 00:17:54.511 ] 00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "bdev_nvme_set_hotplug", 00:17:54.511 "params": { 00:17:54.511 "period_us": 100000, 00:17:54.511 "enable": false 00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "bdev_malloc_create", 00:17:54.511 "params": { 00:17:54.511 "name": "malloc0", 00:17:54.511 "num_blocks": 8192, 00:17:54.511 "block_size": 4096, 00:17:54.511 "physical_block_size": 4096, 00:17:54.511 "uuid": "653000b1-2f33-49c1-9646-a670f1d822aa", 00:17:54.511 "optimal_io_boundary": 0, 00:17:54.511 "md_size": 0, 00:17:54.511 "dif_type": 0, 00:17:54.511 "dif_is_head_of_md": false, 00:17:54.511 "dif_pi_format": 0 00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "bdev_wait_for_examine" 00:17:54.511 } 00:17:54.511 ] 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "subsystem": "scsi", 00:17:54.511 "config": null 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "subsystem": "scheduler", 00:17:54.511 "config": [ 00:17:54.511 { 00:17:54.511 "method": "framework_set_scheduler", 00:17:54.511 "params": { 00:17:54.511 "name": "static" 00:17:54.511 } 00:17:54.511 } 00:17:54.511 ] 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "subsystem": "vhost_scsi", 00:17:54.511 "config": [] 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "subsystem": "vhost_blk", 00:17:54.511 "config": [] 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "subsystem": "ublk", 00:17:54.511 "config": [ 00:17:54.511 { 00:17:54.511 "method": "ublk_create_target", 00:17:54.511 "params": { 00:17:54.511 "cpumask": "1" 00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "ublk_start_disk", 00:17:54.511 "params": { 00:17:54.511 "bdev_name": "malloc0", 00:17:54.511 "ublk_id": 0, 00:17:54.511 "num_queues": 1, 00:17:54.511 "queue_depth": 128 00:17:54.511 } 00:17:54.511 } 00:17:54.511 ] 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "subsystem": "nbd", 00:17:54.511 "config": [] 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 
"subsystem": "nvmf", 00:17:54.511 "config": [ 00:17:54.511 { 00:17:54.511 "method": "nvmf_set_config", 00:17:54.511 "params": { 00:17:54.511 "discovery_filter": "match_any", 00:17:54.511 "admin_cmd_passthru": { 00:17:54.511 "identify_ctrlr": false 00:17:54.511 }, 00:17:54.511 "dhchap_digests": [ 00:17:54.511 "sha256", 00:17:54.511 "sha384", 00:17:54.511 "sha512" 00:17:54.511 ], 00:17:54.511 "dhchap_dhgroups": [ 00:17:54.511 "null", 00:17:54.511 "ffdhe2048", 00:17:54.511 "ffdhe3072", 00:17:54.511 "ffdhe4096", 00:17:54.511 "ffdhe6144", 00:17:54.511 "ffdhe8192" 00:17:54.511 ] 00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "nvmf_set_max_subsystems", 00:17:54.511 "params": { 00:17:54.511 "max_subsystems": 1024 00:17:54.511 } 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "method": "nvmf_set_crdt", 00:17:54.511 "params": { 00:17:54.511 "crdt1": 0, 00:17:54.511 "crdt2": 0, 00:17:54.511 "crdt3": 0 00:17:54.511 } 00:17:54.511 } 00:17:54.511 ] 00:17:54.511 }, 00:17:54.511 { 00:17:54.511 "subsystem": "iscsi", 00:17:54.511 "config": [ 00:17:54.511 { 00:17:54.511 "method": "iscsi_set_options", 00:17:54.511 "params": { 00:17:54.511 "node_base": "iqn.2016-06.io.spdk", 00:17:54.511 "max_sessions": 128, 00:17:54.511 "max_connections_per_session": 2, 00:17:54.511 "max_queue_depth": 64, 00:17:54.511 "default_time2wait": 2, 00:17:54.511 "default_time2retain": 20, 00:17:54.511 "first_burst_length": 8192, 00:17:54.511 "immediate_data": true, 00:17:54.511 "allow_duplicated_isid": false, 00:17:54.511 "error_recovery_level": 0, 00:17:54.511 "nop_timeout": 60, 00:17:54.511 "nop_in_interval": 30, 00:17:54.511 "disable_chap": false, 00:17:54.511 "require_chap": false, 00:17:54.511 "mutual_chap": false, 00:17:54.511 "chap_group": 0, 00:17:54.511 "max_large_datain_per_connection": 64, 00:17:54.511 "max_r2t_per_connection": 4, 00:17:54.511 "pdu_pool_size": 36864, 00:17:54.511 "immediate_data_pool_size": 16384, 00:17:54.511 "data_out_pool_size": 2048 00:17:54.511 } 00:17:54.511 } 00:17:54.511 ] 00:17:54.511 } 00:17:54.511 ] 00:17:54.511 }' 00:17:54.511 12:21:25 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73858 00:17:54.511 12:21:25 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73858 ']' 00:17:54.511 12:21:25 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73858 00:17:54.511 12:21:25 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:54.511 12:21:25 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.511 12:21:25 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73858 00:17:54.511 12:21:25 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.511 killing process with pid 73858 00:17:54.511 12:21:25 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.511 12:21:25 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73858' 00:17:54.511 12:21:25 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73858 00:17:54.511 12:21:25 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73858 00:17:55.897 [2024-12-05 12:21:26.461358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:55.897 [2024-12-05 12:21:26.497517] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:55.897 [2024-12-05 
12:21:26.497690] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:55.897 [2024-12-05 12:21:26.507536] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:55.897 [2024-12-05 12:21:26.507603] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:55.897 [2024-12-05 12:21:26.507621] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:55.897 [2024-12-05 12:21:26.507659] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:55.897 [2024-12-05 12:21:26.507827] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:57.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.815 12:21:28 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73918 00:17:57.815 12:21:28 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73918 00:17:57.815 12:21:28 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73918 ']' 00:17:57.815 12:21:28 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:17:57.815 12:21:28 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.815 12:21:28 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.815 12:21:28 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.815 12:21:28 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.815 12:21:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:57.815 12:21:28 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:17:57.815 "subsystems": [ 00:17:57.815 { 00:17:57.815 "subsystem": "fsdev", 00:17:57.815 "config": [ 00:17:57.815 { 00:17:57.815 "method": "fsdev_set_opts", 00:17:57.815 "params": { 00:17:57.815 "fsdev_io_pool_size": 65535, 00:17:57.815 "fsdev_io_cache_size": 256 00:17:57.815 } 00:17:57.815 } 00:17:57.815 ] 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "subsystem": "keyring", 00:17:57.815 "config": [] 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "subsystem": "iobuf", 00:17:57.815 "config": [ 00:17:57.815 { 00:17:57.815 "method": "iobuf_set_options", 00:17:57.815 "params": { 00:17:57.815 "small_pool_count": 8192, 00:17:57.815 "large_pool_count": 1024, 00:17:57.815 "small_bufsize": 8192, 00:17:57.815 "large_bufsize": 135168, 00:17:57.815 "enable_numa": false 00:17:57.815 } 00:17:57.815 } 00:17:57.815 ] 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "subsystem": "sock", 00:17:57.815 "config": [ 00:17:57.815 { 00:17:57.815 "method": "sock_set_default_impl", 00:17:57.815 "params": { 00:17:57.815 "impl_name": "posix" 00:17:57.815 } 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "method": "sock_impl_set_options", 00:17:57.815 "params": { 00:17:57.815 "impl_name": "ssl", 00:17:57.815 "recv_buf_size": 4096, 00:17:57.815 "send_buf_size": 4096, 00:17:57.815 "enable_recv_pipe": true, 00:17:57.815 "enable_quickack": false, 00:17:57.815 "enable_placement_id": 0, 00:17:57.815 "enable_zerocopy_send_server": true, 00:17:57.815 "enable_zerocopy_send_client": false, 00:17:57.815 "zerocopy_threshold": 0, 00:17:57.815 "tls_version": 0, 00:17:57.815 "enable_ktls": false 00:17:57.815 } 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "method": "sock_impl_set_options", 00:17:57.815 "params": { 00:17:57.815 "impl_name": "posix", 
00:17:57.815 "recv_buf_size": 2097152, 00:17:57.815 "send_buf_size": 2097152, 00:17:57.815 "enable_recv_pipe": true, 00:17:57.815 "enable_quickack": false, 00:17:57.815 "enable_placement_id": 0, 00:17:57.815 "enable_zerocopy_send_server": true, 00:17:57.815 "enable_zerocopy_send_client": false, 00:17:57.815 "zerocopy_threshold": 0, 00:17:57.815 "tls_version": 0, 00:17:57.815 "enable_ktls": false 00:17:57.815 } 00:17:57.815 } 00:17:57.815 ] 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "subsystem": "vmd", 00:17:57.815 "config": [] 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "subsystem": "accel", 00:17:57.815 "config": [ 00:17:57.815 { 00:17:57.815 "method": "accel_set_options", 00:17:57.815 "params": { 00:17:57.815 "small_cache_size": 128, 00:17:57.815 "large_cache_size": 16, 00:17:57.815 "task_count": 2048, 00:17:57.815 "sequence_count": 2048, 00:17:57.815 "buf_count": 2048 00:17:57.815 } 00:17:57.815 } 00:17:57.815 ] 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "subsystem": "bdev", 00:17:57.815 "config": [ 00:17:57.815 { 00:17:57.815 "method": "bdev_set_options", 00:17:57.815 "params": { 00:17:57.815 "bdev_io_pool_size": 65535, 00:17:57.815 "bdev_io_cache_size": 256, 00:17:57.815 "bdev_auto_examine": true, 00:17:57.815 "iobuf_small_cache_size": 128, 00:17:57.815 "iobuf_large_cache_size": 16 00:17:57.815 } 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "method": "bdev_raid_set_options", 00:17:57.815 "params": { 00:17:57.815 "process_window_size_kb": 1024, 00:17:57.815 "process_max_bandwidth_mb_sec": 0 00:17:57.815 } 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "method": "bdev_iscsi_set_options", 00:17:57.815 "params": { 00:17:57.815 "timeout_sec": 30 00:17:57.815 } 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "method": "bdev_nvme_set_options", 00:17:57.815 "params": { 00:17:57.815 "action_on_timeout": "none", 00:17:57.815 "timeout_us": 0, 00:17:57.815 "timeout_admin_us": 0, 00:17:57.815 "keep_alive_timeout_ms": 10000, 00:17:57.815 "arbitration_burst": 0, 00:17:57.815 "low_priority_weight": 0, 00:17:57.815 "medium_priority_weight": 0, 00:17:57.815 "high_priority_weight": 0, 00:17:57.815 "nvme_adminq_poll_period_us": 10000, 00:17:57.815 "nvme_ioq_poll_period_us": 0, 00:17:57.815 "io_queue_requests": 0, 00:17:57.815 "delay_cmd_submit": true, 00:17:57.815 "transport_retry_count": 4, 00:17:57.815 "bdev_retry_count": 3, 00:17:57.815 "transport_ack_timeout": 0, 00:17:57.815 "ctrlr_loss_timeout_sec": 0, 00:17:57.815 "reconnect_delay_sec": 0, 00:17:57.815 "fast_io_fail_timeout_sec": 0, 00:17:57.815 "disable_auto_failback": false, 00:17:57.815 "generate_uuids": false, 00:17:57.815 "transport_tos": 0, 00:17:57.815 "nvme_error_stat": false, 00:17:57.815 "rdma_srq_size": 0, 00:17:57.815 "io_path_stat": false, 00:17:57.815 "allow_accel_sequence": false, 00:17:57.815 "rdma_max_cq_size": 0, 00:17:57.815 "rdma_cm_event_timeout_ms": 0, 00:17:57.815 "dhchap_digests": [ 00:17:57.815 "sha256", 00:17:57.815 "sha384", 00:17:57.815 "sha512" 00:17:57.815 ], 00:17:57.815 "dhchap_dhgroups": [ 00:17:57.815 "null", 00:17:57.815 "ffdhe2048", 00:17:57.815 "ffdhe3072", 00:17:57.815 "ffdhe4096", 00:17:57.815 "ffdhe6144", 00:17:57.815 "ffdhe8192" 00:17:57.815 ] 00:17:57.815 } 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "method": "bdev_nvme_set_hotplug", 00:17:57.815 "params": { 00:17:57.815 "period_us": 100000, 00:17:57.815 "enable": false 00:17:57.815 } 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "method": "bdev_malloc_create", 00:17:57.815 "params": { 00:17:57.815 "name": "malloc0", 00:17:57.815 "num_blocks": 8192, 00:17:57.815 
"block_size": 4096, 00:17:57.815 "physical_block_size": 4096, 00:17:57.815 "uuid": "653000b1-2f33-49c1-9646-a670f1d822aa", 00:17:57.815 "optimal_io_boundary": 0, 00:17:57.815 "md_size": 0, 00:17:57.815 "dif_type": 0, 00:17:57.815 "dif_is_head_of_md": false, 00:17:57.815 "dif_pi_format": 0 00:17:57.815 } 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "method": "bdev_wait_for_examine" 00:17:57.815 } 00:17:57.815 ] 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "subsystem": "scsi", 00:17:57.815 "config": null 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "subsystem": "scheduler", 00:17:57.815 "config": [ 00:17:57.815 { 00:17:57.815 "method": "framework_set_scheduler", 00:17:57.815 "params": { 00:17:57.815 "name": "static" 00:17:57.815 } 00:17:57.815 } 00:17:57.815 ] 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "subsystem": "vhost_scsi", 00:17:57.815 "config": [] 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "subsystem": "vhost_blk", 00:17:57.815 "config": [] 00:17:57.815 }, 00:17:57.815 { 00:17:57.815 "subsystem": "ublk", 00:17:57.815 "config": [ 00:17:57.816 { 00:17:57.816 "method": "ublk_create_target", 00:17:57.816 "params": { 00:17:57.816 "cpumask": "1" 00:17:57.816 } 00:17:57.816 }, 00:17:57.816 { 00:17:57.816 "method": "ublk_start_disk", 00:17:57.816 "params": { 00:17:57.816 "bdev_name": "malloc0", 00:17:57.816 "ublk_id": 0, 00:17:57.816 "num_queues": 1, 00:17:57.816 "queue_depth": 128 00:17:57.816 } 00:17:57.816 } 00:17:57.816 ] 00:17:57.816 }, 00:17:57.816 { 00:17:57.816 "subsystem": "nbd", 00:17:57.816 "config": [] 00:17:57.816 }, 00:17:57.816 { 00:17:57.816 "subsystem": "nvmf", 00:17:57.816 "config": [ 00:17:57.816 { 00:17:57.816 "method": "nvmf_set_config", 00:17:57.816 "params": { 00:17:57.816 "discovery_filter": "match_any", 00:17:57.816 "admin_cmd_passthru": { 00:17:57.816 "identify_ctrlr": false 00:17:57.816 }, 00:17:57.816 "dhchap_digests": [ 00:17:57.816 "sha256", 00:17:57.816 "sha384", 00:17:57.816 "sha512" 00:17:57.816 ], 00:17:57.816 "dhchap_dhgroups": [ 00:17:57.816 "null", 00:17:57.816 "ffdhe2048", 00:17:57.816 "ffdhe3072", 00:17:57.816 "ffdhe4096", 00:17:57.816 "ffdhe6144", 00:17:57.816 "ffdhe8192" 00:17:57.816 ] 00:17:57.816 } 00:17:57.816 }, 00:17:57.816 { 00:17:57.816 "method": "nvmf_set_max_subsystems", 00:17:57.816 "params": { 00:17:57.816 "max_subsystems": 1024 00:17:57.816 } 00:17:57.816 }, 00:17:57.816 { 00:17:57.816 "method": "nvmf_set_crdt", 00:17:57.816 "params": { 00:17:57.816 "crdt1": 0, 00:17:57.816 "crdt2": 0, 00:17:57.816 "crdt3": 0 00:17:57.816 } 00:17:57.816 } 00:17:57.816 ] 00:17:57.816 }, 00:17:57.816 { 00:17:57.816 "subsystem": "iscsi", 00:17:57.816 "config": [ 00:17:57.816 { 00:17:57.816 "method": "iscsi_set_options", 00:17:57.816 "params": { 00:17:57.816 "node_base": "iqn.2016-06.io.spdk", 00:17:57.816 "max_sessions": 128, 00:17:57.816 "max_connections_per_session": 2, 00:17:57.816 "max_queue_depth": 64, 00:17:57.816 "default_time2wait": 2, 00:17:57.816 "default_time2retain": 20, 00:17:57.816 "first_burst_length": 8192, 00:17:57.816 "immediate_data": true, 00:17:57.816 "allow_duplicated_isid": false, 00:17:57.816 "error_recovery_level": 0, 00:17:57.816 "nop_timeout": 60, 00:17:57.816 "nop_in_interval": 30, 00:17:57.816 "disable_chap": false, 00:17:57.816 "require_chap": false, 00:17:57.816 "mutual_chap": false, 00:17:57.816 "chap_group": 0, 00:17:57.816 "max_large_datain_per_connection": 64, 00:17:57.816 "max_r2t_per_connection": 4, 00:17:57.816 "pdu_pool_size": 36864, 00:17:57.816 "immediate_data_pool_size": 16384, 00:17:57.816 "data_out_pool_size": 2048 
00:17:57.816 } 00:17:57.816 } 00:17:57.816 ] 00:17:57.816 } 00:17:57.816 ] 00:17:57.816 }' 00:17:57.816 [2024-12-05 12:21:28.296415] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:17:57.816 [2024-12-05 12:21:28.296560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73918 ] 00:17:57.816 [2024-12-05 12:21:28.455820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.816 [2024-12-05 12:21:28.595519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.762 [2024-12-05 12:21:29.572487] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:58.762 [2024-12-05 12:21:29.573478] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:58.762 [2024-12-05 12:21:29.580637] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:58.762 [2024-12-05 12:21:29.580737] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:58.762 [2024-12-05 12:21:29.580749] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:58.762 [2024-12-05 12:21:29.580759] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:58.762 [2024-12-05 12:21:29.588642] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:58.762 [2024-12-05 12:21:29.588675] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:58.762 [2024-12-05 12:21:29.596502] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:58.762 [2024-12-05 12:21:29.596639] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:58.762 [2024-12-05 12:21:29.613506] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73918 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73918 ']' 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73918 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73918 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73918' 00:17:59.031 killing process with pid 73918 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73918 00:17:59.031 12:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73918 00:18:00.489 [2024-12-05 12:21:30.965275] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:00.489 [2024-12-05 12:21:30.998563] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:00.489 [2024-12-05 12:21:30.998663] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:00.489 [2024-12-05 12:21:31.005488] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:00.489 [2024-12-05 12:21:31.005528] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:00.489 [2024-12-05 12:21:31.005535] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:00.489 [2024-12-05 12:21:31.005558] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:00.489 [2024-12-05 12:21:31.005674] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:01.425 12:21:32 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:18:01.425 00:18:01.425 real 0m8.478s 00:18:01.425 user 0m5.693s 00:18:01.425 sys 0m3.430s 00:18:01.425 12:21:32 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.426 ************************************ 00:18:01.426 END TEST test_save_ublk_config 00:18:01.426 ************************************ 00:18:01.426 12:21:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:01.426 12:21:32 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73993 00:18:01.426 12:21:32 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:01.426 12:21:32 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73993 00:18:01.426 12:21:32 ublk -- common/autotest_common.sh@835 -- # '[' -z 73993 ']' 00:18:01.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.426 12:21:32 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.426 12:21:32 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.426 12:21:32 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.426 12:21:32 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.426 12:21:32 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:01.426 12:21:32 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:01.685 [2024-12-05 12:21:32.349888] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:18:01.685 [2024-12-05 12:21:32.350006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73993 ] 00:18:01.685 [2024-12-05 12:21:32.506513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:01.944 [2024-12-05 12:21:32.601751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.944 [2024-12-05 12:21:32.601755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.512 12:21:33 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.512 12:21:33 ublk -- common/autotest_common.sh@868 -- # return 0 00:18:02.512 12:21:33 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:18:02.512 12:21:33 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:02.512 12:21:33 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.512 12:21:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:02.512 ************************************ 00:18:02.512 START TEST test_create_ublk 00:18:02.512 ************************************ 00:18:02.512 12:21:33 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:18:02.512 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:18:02.512 12:21:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.512 12:21:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:02.512 [2024-12-05 12:21:33.195481] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:02.512 [2024-12-05 12:21:33.197183] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:02.512 12:21:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.512 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:18:02.512 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:18:02.512 12:21:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.512 12:21:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:02.512 12:21:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.512 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:18:02.512 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:02.512 12:21:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.512 12:21:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:02.512 [2024-12-05 12:21:33.371627] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:02.512 [2024-12-05 12:21:33.371946] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:02.512 [2024-12-05 12:21:33.371960] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:02.512 [2024-12-05 12:21:33.371966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:02.771 [2024-12-05 12:21:33.381486] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:02.771 [2024-12-05 12:21:33.381506] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:02.771 
[2024-12-05 12:21:33.392481] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:02.771 [2024-12-05 12:21:33.393002] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:02.771 [2024-12-05 12:21:33.405484] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:02.771 12:21:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.771 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:18:02.771 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:18:02.771 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:18:02.771 12:21:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.771 12:21:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:02.771 12:21:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.771 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:18:02.771 { 00:18:02.771 "ublk_device": "/dev/ublkb0", 00:18:02.771 "id": 0, 00:18:02.771 "queue_depth": 512, 00:18:02.771 "num_queues": 4, 00:18:02.771 "bdev_name": "Malloc0" 00:18:02.771 } 00:18:02.771 ]' 00:18:02.771 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:18:02.771 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:02.771 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:18:02.771 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:18:02.771 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:18:02.771 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:18:02.771 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:18:02.771 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:18:02.771 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:18:02.771 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:02.771 12:21:33 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:18:02.771 12:21:33 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:18:02.771 12:21:33 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:18:02.771 12:21:33 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:18:02.771 12:21:33 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:18:02.771 12:21:33 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:18:02.771 12:21:33 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:18:02.771 12:21:33 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:18:02.771 12:21:33 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:18:02.771 12:21:33 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:02.771 12:21:33 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
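The template assembled above expands to the single fio command the log runs next: a 10-second pattern write over the first 128 MiB of /dev/ublkb0 with verify bookkeeping enabled. The same invocation, annotated (flags copied from the template, nothing added):

    # Writes pattern 0xcc with O_DIRECT over a 128 MiB region (--size=134217728),
    # time-bounded at 10 s; --do_verify=1 records verify state but, as fio notes
    # below, the read-back phase never starts because the write phase uses all
    # of the runtime.
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
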
00:18:02.771 12:21:33 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:18:03.029 fio: verification read phase will never start because write phase uses all of runtime 00:18:03.030 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:18:03.030 fio-3.35 00:18:03.030 Starting 1 process 00:18:13.006 00:18:13.006 fio_test: (groupid=0, jobs=1): err= 0: pid=74036: Thu Dec 5 12:21:43 2024 00:18:13.006 write: IOPS=18.4k, BW=71.7MiB/s (75.2MB/s)(717MiB/10001msec); 0 zone resets 00:18:13.006 clat (usec): min=37, max=3996, avg=53.71, stdev=91.64 00:18:13.006 lat (usec): min=37, max=3996, avg=54.13, stdev=91.66 00:18:13.006 clat percentiles (usec): 00:18:13.006 | 1.00th=[ 41], 5.00th=[ 43], 10.00th=[ 45], 20.00th=[ 46], 00:18:13.006 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 51], 00:18:13.006 | 70.00th=[ 52], 80.00th=[ 53], 90.00th=[ 56], 95.00th=[ 60], 00:18:13.006 | 99.00th=[ 73], 99.50th=[ 121], 99.90th=[ 1795], 99.95th=[ 2671], 00:18:13.006 | 99.99th=[ 3458] 00:18:13.006 bw ( KiB/s): min=69848, max=79560, per=100.00%, avg=73536.84, stdev=2421.32, samples=19 00:18:13.006 iops : min=17462, max=19890, avg=18384.21, stdev=605.33, samples=19 00:18:13.006 lat (usec) : 50=57.79%, 100=41.69%, 250=0.31%, 500=0.04%, 750=0.01% 00:18:13.006 lat (usec) : 1000=0.01% 00:18:13.006 lat (msec) : 2=0.06%, 4=0.09% 00:18:13.006 cpu : usr=2.74%, sys=16.13%, ctx=183661, majf=0, minf=794 00:18:13.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:13.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:13.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:13.006 issued rwts: total=0,183622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:13.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:13.006 00:18:13.006 Run status group 0 (all jobs): 00:18:13.006 WRITE: bw=71.7MiB/s (75.2MB/s), 71.7MiB/s-71.7MiB/s (75.2MB/s-75.2MB/s), io=717MiB (752MB), run=10001-10001msec 00:18:13.006 00:18:13.006 Disk stats (read/write): 00:18:13.006 ublkb0: ios=0/181755, merge=0/0, ticks=0/7794, in_queue=7794, util=99.09% 00:18:13.006 12:21:43 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:18:13.006 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.006 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.006 [2024-12-05 12:21:43.812944] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:13.006 [2024-12-05 12:21:43.856514] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:13.006 [2024-12-05 12:21:43.857138] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:13.006 [2024-12-05 12:21:43.860781] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:13.006 [2024-12-05 12:21:43.861028] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:13.006 [2024-12-05 12:21:43.861042] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:13.006 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.006 12:21:43 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
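The NOT helper inverts the exit status, so the assertion below passes only if the RPC fails: disk 0 was already stopped, and a second ublk_stop_disk must come back with -ENODEV. The same expectation without the autotest wrappers (rpc.py path as used later in this log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    if "$RPC" ublk_stop_disk 0; then
        echo "BUG: stopping an already-removed ublk id succeeded" >&2
        exit 1
    fi
    # Expected failure, matching the JSON-RPC response printed below:
    #   {"code": -19, "message": "No such device"}
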
00:18:13.006 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:18:13.006 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:13.006 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:13.006 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.006 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:13.265 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.265 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:18:13.265 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.265 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.265 [2024-12-05 12:21:43.879538] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:13.265 request: 00:18:13.265 { 00:18:13.265 "ublk_id": 0, 00:18:13.265 "method": "ublk_stop_disk", 00:18:13.265 "req_id": 1 00:18:13.265 } 00:18:13.265 Got JSON-RPC error response 00:18:13.265 response: 00:18:13.265 { 00:18:13.265 "code": -19, 00:18:13.265 "message": "No such device" 00:18:13.265 } 00:18:13.265 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:13.265 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:18:13.265 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.265 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.265 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:13.265 12:21:43 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:13.265 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.265 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.265 [2024-12-05 12:21:43.895542] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:13.265 [2024-12-05 12:21:43.899418] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:13.265 [2024-12-05 12:21:43.899454] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:13.265 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.265 12:21:43 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:13.265 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.265 12:21:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.523 12:21:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.523 12:21:44 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:13.523 12:21:44 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:13.523 12:21:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.523 12:21:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.523 12:21:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.523 12:21:44 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:13.523 12:21:44 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:13.523 12:21:44 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:13.523 12:21:44 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:13.523 12:21:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.523 12:21:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.523 12:21:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.523 12:21:44 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:13.523 12:21:44 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:13.523 12:21:44 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:13.523 00:18:13.523 real 0m11.186s 00:18:13.523 user 0m0.565s 00:18:13.523 sys 0m1.695s 00:18:13.523 ************************************ 00:18:13.523 END TEST test_create_ublk 00:18:13.523 ************************************ 00:18:13.523 12:21:44 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.523 12:21:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.782 12:21:44 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:13.782 12:21:44 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:13.782 12:21:44 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.782 12:21:44 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.782 ************************************ 00:18:13.782 START TEST test_create_multi_ublk 00:18:13.782 ************************************ 00:18:13.782 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:18:13.782 12:21:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:13.782 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.782 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.782 [2024-12-05 12:21:44.435479] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:13.782 [2024-12-05 12:21:44.437166] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:13.782 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.782 12:21:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:13.782 12:21:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:13.782 12:21:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:13.782 12:21:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:13.782 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.782 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.041 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.041 12:21:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:14.041 12:21:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:14.041 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.041 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.041 [2024-12-05 12:21:44.675597] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
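test_create_multi_ublk repeats the malloc-plus-start pair once per id in seq 0 3. Condensed into direct RPC calls (same commands and sizes as the xtrace above and below; the loop is the script's, the direct rpc.py form is illustrative):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1 2 3; do
        "$RPC" bdev_malloc_create -b "Malloc$i" 128 4096     # 128 MiB, 4 KiB blocks
        "$RPC" ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # exposes /dev/ublkb$i
    done
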
00:18:14.041 [2024-12-05 12:21:44.675921] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:14.041 [2024-12-05 12:21:44.675933] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:14.041 [2024-12-05 12:21:44.675943] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:14.041 [2024-12-05 12:21:44.695485] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:14.041 [2024-12-05 12:21:44.695508] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:14.041 [2024-12-05 12:21:44.707479] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:14.041 [2024-12-05 12:21:44.708012] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:14.041 [2024-12-05 12:21:44.714988] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:14.041 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.041 12:21:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:14.041 12:21:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:14.041 12:21:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:14.041 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.041 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.300 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.300 12:21:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:14.300 12:21:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:14.300 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.300 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.300 [2024-12-05 12:21:44.935593] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:14.300 [2024-12-05 12:21:44.935902] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:18:14.300 [2024-12-05 12:21:44.935916] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:14.300 [2024-12-05 12:21:44.935921] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:14.300 [2024-12-05 12:21:44.943495] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:14.300 [2024-12-05 12:21:44.943511] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:14.300 [2024-12-05 12:21:44.951492] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:14.300 [2024-12-05 12:21:44.952024] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:14.300 [2024-12-05 12:21:44.960479] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:14.300 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.300 12:21:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:14.300 12:21:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:14.300 12:21:44 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:14.300 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.300 12:21:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.300 12:21:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.300 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:14.300 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:14.300 12:21:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.300 12:21:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.300 [2024-12-05 12:21:45.148579] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:14.300 [2024-12-05 12:21:45.148899] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:14.300 [2024-12-05 12:21:45.148911] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:14.300 [2024-12-05 12:21:45.148918] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:14.300 [2024-12-05 12:21:45.152937] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:14.300 [2024-12-05 12:21:45.152958] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:14.300 [2024-12-05 12:21:45.163485] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:14.300 [2024-12-05 12:21:45.164012] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:14.558 [2024-12-05 12:21:45.176479] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.558 [2024-12-05 12:21:45.351598] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:18:14.558 [2024-12-05 12:21:45.351911] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:18:14.558 [2024-12-05 12:21:45.351925] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:18:14.558 [2024-12-05 12:21:45.351930] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:18:14.558 [2024-12-05 
12:21:45.359509] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:14.558 [2024-12-05 12:21:45.359527] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:14.558 [2024-12-05 12:21:45.367481] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:14.558 [2024-12-05 12:21:45.368004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:18:14.558 [2024-12-05 12:21:45.375556] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.558 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:18:14.558 { 00:18:14.558 "ublk_device": "/dev/ublkb0", 00:18:14.558 "id": 0, 00:18:14.558 "queue_depth": 512, 00:18:14.558 "num_queues": 4, 00:18:14.558 "bdev_name": "Malloc0" 00:18:14.558 }, 00:18:14.558 { 00:18:14.559 "ublk_device": "/dev/ublkb1", 00:18:14.559 "id": 1, 00:18:14.559 "queue_depth": 512, 00:18:14.559 "num_queues": 4, 00:18:14.559 "bdev_name": "Malloc1" 00:18:14.559 }, 00:18:14.559 { 00:18:14.559 "ublk_device": "/dev/ublkb2", 00:18:14.559 "id": 2, 00:18:14.559 "queue_depth": 512, 00:18:14.559 "num_queues": 4, 00:18:14.559 "bdev_name": "Malloc2" 00:18:14.559 }, 00:18:14.559 { 00:18:14.559 "ublk_device": "/dev/ublkb3", 00:18:14.559 "id": 3, 00:18:14.559 "queue_depth": 512, 00:18:14.559 "num_queues": 4, 00:18:14.559 "bdev_name": "Malloc3" 00:18:14.559 } 00:18:14.559 ]' 00:18:14.559 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:18:14.559 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:14.559 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
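The script asserts on each array index separately; an equivalent single jq pass over the same ublk_get_disks payload would print one summary line per device (illustrative only, not what ublk.sh does):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" ublk_get_disks | jq -r \
        '.[] | "\(.id) \(.ublk_device) queues=\(.num_queues) depth=\(.queue_depth) bdev=\(.bdev_name)"'
    # 0 /dev/ublkb0 queues=4 depth=512 bdev=Malloc0
    # 1 /dev/ublkb1 queues=4 depth=512 bdev=Malloc1   ...and so on for ids 2-3
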
00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:14.817 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:18:15.076 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:18:15.335 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:15.335 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:18:15.335 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:15.335 12:21:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:15.335 [2024-12-05 12:21:46.015561] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:15.335 [2024-12-05 12:21:46.061969] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:15.335 [2024-12-05 12:21:46.062897] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:15.335 [2024-12-05 12:21:46.071489] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:15.335 [2024-12-05 12:21:46.071713] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:15.335 [2024-12-05 12:21:46.071727] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:15.335 [2024-12-05 12:21:46.087529] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:15.335 [2024-12-05 12:21:46.120034] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:15.335 [2024-12-05 12:21:46.120863] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:15.335 [2024-12-05 12:21:46.132522] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:15.335 [2024-12-05 12:21:46.132746] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:15.335 [2024-12-05 12:21:46.132759] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:15.335 [2024-12-05 12:21:46.147564] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:18:15.335 [2024-12-05 12:21:46.186506] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:15.335 [2024-12-05 12:21:46.187114] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:18:15.335 [2024-12-05 12:21:46.191484] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:15.335 [2024-12-05 12:21:46.191712] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:18:15.335 [2024-12-05 12:21:46.191726] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.335 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
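Teardown runs setup in reverse: each disk is stopped (ids 0-3), the ublk target is destroyed with a generous 120 s RPC timeout so the kernel side can drain, and the backing bdevs are deleted. A condensed sketch of the commands the surrounding xtrace shows:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1 2 3; do
        "$RPC" ublk_stop_disk "$i"
    done
    "$RPC" -t 120 ublk_destroy_target
    for i in 0 1 2 3; do
        "$RPC" bdev_malloc_delete "Malloc$i"
    done
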
00:18:15.335 [2024-12-05 12:21:46.199549] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:18:15.594 [2024-12-05 12:21:46.237972] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:15.594 [2024-12-05 12:21:46.238719] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:18:15.594 [2024-12-05 12:21:46.243486] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:15.594 [2024-12-05 12:21:46.243698] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:18:15.594 [2024-12-05 12:21:46.243706] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:18:15.594 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.594 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:18:15.594 [2024-12-05 12:21:46.443527] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:15.594 [2024-12-05 12:21:46.447265] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:15.594 [2024-12-05 12:21:46.447296] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:15.853 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:18:15.853 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:15.853 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:15.853 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.853 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:16.112 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.112 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:16.112 12:21:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:16.112 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.112 12:21:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:16.371 12:21:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.371 12:21:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:16.371 12:21:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:16.371 12:21:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.371 12:21:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:16.937 12:21:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.937 12:21:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:16.937 12:21:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:16.937 12:21:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.937 12:21:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:17.197 00:18:17.197 real 0m3.479s 00:18:17.197 user 0m0.802s 00:18:17.197 sys 0m0.139s 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.197 12:21:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:17.197 ************************************ 00:18:17.197 END TEST test_create_multi_ublk 00:18:17.197 ************************************ 00:18:17.197 12:21:47 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:17.197 12:21:47 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:17.197 12:21:47 ublk -- ublk/ublk.sh@130 -- # killprocess 73993 00:18:17.197 12:21:47 ublk -- common/autotest_common.sh@954 -- # '[' -z 73993 ']' 00:18:17.197 12:21:47 ublk -- common/autotest_common.sh@958 -- # kill -0 73993 00:18:17.197 12:21:47 ublk -- common/autotest_common.sh@959 -- # uname 00:18:17.197 12:21:47 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.197 12:21:47 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73993 00:18:17.197 12:21:47 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.197 12:21:47 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.197 12:21:47 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73993' 00:18:17.197 killing process with pid 73993 00:18:17.197 12:21:47 ublk -- common/autotest_common.sh@973 -- # kill 73993 00:18:17.197 12:21:47 ublk -- common/autotest_common.sh@978 -- # wait 73993 00:18:17.763 [2024-12-05 12:21:48.542957] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:17.764 [2024-12-05 12:21:48.543010] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:18.699 00:18:18.699 real 0m25.697s 00:18:18.699 user 0m35.432s 00:18:18.699 sys 0m10.758s 00:18:18.699 12:21:49 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.699 ************************************ 00:18:18.699 END TEST ublk 00:18:18.699 12:21:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.699 ************************************ 00:18:18.699 12:21:49 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:18.699 12:21:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:18:18.699 12:21:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.699 12:21:49 -- common/autotest_common.sh@10 -- # set +x 00:18:18.699 ************************************ 00:18:18.699 START TEST ublk_recovery 00:18:18.699 ************************************ 00:18:18.699 12:21:49 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:18.699 * Looking for test storage... 00:18:18.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:18.699 12:21:49 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:18.699 12:21:49 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:18:18.699 12:21:49 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:18.699 12:21:49 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.699 12:21:49 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:18:18.699 12:21:49 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.699 12:21:49 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:18.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.699 --rc genhtml_branch_coverage=1 00:18:18.699 --rc genhtml_function_coverage=1 00:18:18.699 --rc genhtml_legend=1 00:18:18.699 --rc geninfo_all_blocks=1 00:18:18.699 --rc geninfo_unexecuted_blocks=1 00:18:18.699 00:18:18.699 ' 00:18:18.699 12:21:49 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:18.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.699 --rc genhtml_branch_coverage=1 00:18:18.699 --rc genhtml_function_coverage=1 00:18:18.699 --rc genhtml_legend=1 00:18:18.699 --rc geninfo_all_blocks=1 00:18:18.699 --rc geninfo_unexecuted_blocks=1 00:18:18.699 00:18:18.699 ' 00:18:18.699 12:21:49 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:18.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.699 --rc genhtml_branch_coverage=1 00:18:18.699 --rc genhtml_function_coverage=1 00:18:18.699 --rc genhtml_legend=1 00:18:18.699 --rc geninfo_all_blocks=1 00:18:18.699 --rc geninfo_unexecuted_blocks=1 00:18:18.699 00:18:18.699 ' 00:18:18.699 12:21:49 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:18.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.699 --rc genhtml_branch_coverage=1 00:18:18.699 --rc genhtml_function_coverage=1 00:18:18.699 --rc genhtml_legend=1 00:18:18.699 --rc geninfo_all_blocks=1 00:18:18.699 --rc geninfo_unexecuted_blocks=1 00:18:18.699 00:18:18.699 ' 00:18:18.699 12:21:49 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:18.699 12:21:49 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:18.699 12:21:49 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:18.699 12:21:49 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:18.700 12:21:49 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:18.700 12:21:49 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:18.700 12:21:49 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:18.700 12:21:49 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:18.700 12:21:49 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:18:18.700 12:21:49 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:18.700 12:21:49 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74394 00:18:18.700 12:21:49 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.700 12:21:49 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74394 00:18:18.700 12:21:49 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74394 ']' 00:18:18.700 12:21:49 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.700 12:21:49 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.700 12:21:49 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.700 12:21:49 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:18.700 12:21:49 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.700 12:21:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:18.700 [2024-12-05 12:21:49.533489] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:18:18.700 [2024-12-05 12:21:49.533607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74394 ] 00:18:18.958 [2024-12-05 12:21:49.688448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:18.958 [2024-12-05 12:21:49.778772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.958 [2024-12-05 12:21:49.778836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.524 12:21:50 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.524 12:21:50 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:19.524 12:21:50 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:19.524 12:21:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.524 12:21:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.524 [2024-12-05 12:21:50.322206] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:19.524 [2024-12-05 12:21:50.323970] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:19.525 12:21:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.525 12:21:50 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:19.525 12:21:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.525 12:21:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.783 malloc0 00:18:19.783 12:21:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.783 12:21:50 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:19.783 12:21:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.783 12:21:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.783 [2024-12-05 12:21:50.418590] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:18:19.783 [2024-12-05 12:21:50.418677] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:19.783 [2024-12-05 12:21:50.418686] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:19.783 [2024-12-05 12:21:50.418692] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:19.783 [2024-12-05 12:21:50.422838] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:19.783 [2024-12-05 12:21:50.422851] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:19.783 [2024-12-05 12:21:50.424012] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:19.783 [2024-12-05 12:21:50.424144] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:19.783 [2024-12-05 12:21:50.429679] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:19.783 1 00:18:19.783 12:21:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.783 12:21:50 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:20.718 12:21:51 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74429 00:18:20.718 12:21:51 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:20.718 12:21:51 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:20.718 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:20.718 fio-3.35 00:18:20.718 Starting 1 process 00:18:25.988 12:21:56 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74394 00:18:25.988 12:21:56 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:31.277 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74394 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:31.277 12:22:01 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74540 00:18:31.277 12:22:01 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.277 12:22:01 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:31.277 12:22:01 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74540 00:18:31.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.277 12:22:01 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74540 ']' 00:18:31.277 12:22:01 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.277 12:22:01 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.277 12:22:01 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.277 12:22:01 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.277 12:22:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.277 [2024-12-05 12:22:01.525593] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
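The recovery scenario hinges on the three steps visible here: fio keeps hammering /dev/ublkb1 while the target that owns it is killed with SIGKILL, a fresh target comes up, and ublk_recover_disk re-attaches bdev malloc0 to the still-live kernel device instead of recreating it (the UBLK_CMD_START_USER_RECOVERY / END_USER_RECOVERY exchange below). A sketch of that sequence, with paths and arguments taken from this log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    kill -9 "$spdk_pid"      # hard-kill the target mid-I/O; fio on /dev/ublkb1 keeps running
    sleep 5
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    # ...wait for /var/tmp/spdk.sock as before, then:
    "$RPC" ublk_create_target
    "$RPC" bdev_malloc_create -b malloc0 64 4096   # 64 MiB, matching the original bdev
    "$RPC" ublk_recover_disk malloc0 1             # re-adopt ublk id 1 rather than recreate it
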
00:18:31.277 [2024-12-05 12:22:01.525713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74540 ] 00:18:31.277 [2024-12-05 12:22:01.676346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:31.277 [2024-12-05 12:22:01.766869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.277 [2024-12-05 12:22:01.766999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.536 12:22:02 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.536 12:22:02 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:31.536 12:22:02 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:31.536 12:22:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.536 12:22:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.536 [2024-12-05 12:22:02.319481] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:31.536 [2024-12-05 12:22:02.321181] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:31.536 12:22:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.536 12:22:02 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:31.536 12:22:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.536 12:22:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.795 malloc0 00:18:31.795 12:22:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.795 12:22:02 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:31.795 12:22:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.795 12:22:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.795 [2024-12-05 12:22:02.407935] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:31.795 [2024-12-05 12:22:02.407969] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:31.795 [2024-12-05 12:22:02.407978] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:31.795 [2024-12-05 12:22:02.415507] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:31.795 [2024-12-05 12:22:02.415529] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:18:31.795 [2024-12-05 12:22:02.415536] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:31.795 [2024-12-05 12:22:02.415605] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:31.795 1 00:18:31.795 12:22:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.795 12:22:02 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74429 00:18:31.795 [2024-12-05 12:22:02.423481] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:31.795 [2024-12-05 12:22:02.426290] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:31.795 [2024-12-05 12:22:02.430651] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:31.795 [2024-12-05 
12:22:02.430670] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:28.053 00:19:28.053 fio_test: (groupid=0, jobs=1): err= 0: pid=74433: Thu Dec 5 12:22:51 2024 00:19:28.053 read: IOPS=27.4k, BW=107MiB/s (112MB/s)(6425MiB/60002msec) 00:19:28.053 slat (nsec): min=1085, max=304858, avg=4881.35, stdev=1450.98 00:19:28.053 clat (usec): min=698, max=5999.5k, avg=2313.52, stdev=39121.85 00:19:28.053 lat (usec): min=710, max=5999.5k, avg=2318.40, stdev=39121.85 00:19:28.053 clat percentiles (usec): 00:19:28.053 | 1.00th=[ 1729], 5.00th=[ 1844], 10.00th=[ 1876], 20.00th=[ 1909], 00:19:28.053 | 30.00th=[ 1926], 40.00th=[ 1942], 50.00th=[ 1958], 60.00th=[ 1975], 00:19:28.053 | 70.00th=[ 1991], 80.00th=[ 2008], 90.00th=[ 2057], 95.00th=[ 2737], 00:19:28.053 | 99.00th=[ 4686], 99.50th=[ 5014], 99.90th=[ 6063], 99.95th=[ 6980], 00:19:28.053 | 99.99th=[12911] 00:19:28.053 bw ( KiB/s): min=26616, max=129088, per=100.00%, avg=120770.07, stdev=12199.39, samples=108 00:19:28.053 iops : min= 6654, max=32272, avg=30192.52, stdev=3049.85, samples=108 00:19:28.053 write: IOPS=27.4k, BW=107MiB/s (112MB/s)(6419MiB/60002msec); 0 zone resets 00:19:28.053 slat (nsec): min=1134, max=260914, avg=4912.19, stdev=1489.72 00:19:28.053 clat (usec): min=626, max=5999.5k, avg=2347.15, stdev=35627.55 00:19:28.053 lat (usec): min=630, max=5999.6k, avg=2352.06, stdev=35627.55 00:19:28.053 clat percentiles (usec): 00:19:28.053 | 1.00th=[ 1762], 5.00th=[ 1926], 10.00th=[ 1958], 20.00th=[ 1991], 00:19:28.053 | 30.00th=[ 2024], 40.00th=[ 2040], 50.00th=[ 2057], 60.00th=[ 2073], 00:19:28.053 | 70.00th=[ 2073], 80.00th=[ 2114], 90.00th=[ 2147], 95.00th=[ 2671], 00:19:28.053 | 99.00th=[ 4686], 99.50th=[ 5080], 99.90th=[ 6128], 99.95th=[ 7111], 00:19:28.053 | 99.99th=[13042] 00:19:28.054 bw ( KiB/s): min=26104, max=129952, per=100.00%, avg=120658.52, stdev=12344.84, samples=108 00:19:28.054 iops : min= 6526, max=32488, avg=30164.63, stdev=3086.21, samples=108 00:19:28.054 lat (usec) : 750=0.01%, 1000=0.01% 00:19:28.054 lat (msec) : 2=48.42%, 4=49.40%, 10=2.15%, 20=0.02%, >=2000=0.01% 00:19:28.054 cpu : usr=6.04%, sys=27.16%, ctx=109074, majf=0, minf=14 00:19:28.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:28.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.054 issued rwts: total=1644812,1643332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.054 00:19:28.054 Run status group 0 (all jobs): 00:19:28.054 READ: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=6425MiB (6737MB), run=60002-60002msec 00:19:28.054 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=6419MiB (6731MB), run=60002-60002msec 00:19:28.054 00:19:28.054 Disk stats (read/write): 00:19:28.054 ublkb1: ios=1641409/1640041, merge=0/0, ticks=3719688/3637907, in_queue=7357595, util=99.89% 00:19:28.054 12:22:51 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:28.054 [2024-12-05 12:22:51.688219] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:28.054 [2024-12-05 12:22:51.723589] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:19:28.054 [2024-12-05 12:22:51.723730] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:28.054 [2024-12-05 12:22:51.733495] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:28.054 [2024-12-05 12:22:51.733592] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:28.054 [2024-12-05 12:22:51.733601] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.054 12:22:51 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:28.054 [2024-12-05 12:22:51.747584] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:28.054 [2024-12-05 12:22:51.757479] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:28.054 [2024-12-05 12:22:51.757509] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.054 12:22:51 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:28.054 12:22:51 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:28.054 12:22:51 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74540 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74540 ']' 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74540 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74540 00:19:28.054 killing process with pid 74540 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74540' 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74540 00:19:28.054 12:22:51 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74540 00:19:28.054 [2024-12-05 12:22:52.842253] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:28.054 [2024-12-05 12:22:52.842299] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:28.054 ************************************ 00:19:28.054 END TEST ublk_recovery 00:19:28.054 ************************************ 00:19:28.054 00:19:28.054 real 1m4.297s 00:19:28.054 user 1m47.616s 00:19:28.054 sys 0m30.030s 00:19:28.054 12:22:53 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.054 12:22:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:28.054 12:22:53 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:19:28.054 12:22:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:28.054 12:22:53 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:28.054 12:22:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.054 12:22:53 -- common/autotest_common.sh@10 -- # set +x 00:19:28.054 12:22:53 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:28.054 12:22:53 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:28.054 12:22:53 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:19:28.054 12:22:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:28.054 12:22:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:28.054 12:22:53 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:28.054 12:22:53 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:28.054 12:22:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:28.054 12:22:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:28.054 12:22:53 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:19:28.054 12:22:53 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:28.054 12:22:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:28.054 12:22:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.054 12:22:53 -- common/autotest_common.sh@10 -- # set +x 00:19:28.054 ************************************ 00:19:28.054 START TEST ftl 00:19:28.054 ************************************ 00:19:28.054 12:22:53 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:28.054 * Looking for test storage... 00:19:28.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:28.054 12:22:53 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:28.054 12:22:53 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:19:28.054 12:22:53 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:28.054 12:22:53 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:28.054 12:22:53 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.054 12:22:53 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.054 12:22:53 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.054 12:22:53 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.054 12:22:53 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.054 12:22:53 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.054 12:22:53 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.054 12:22:53 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.054 12:22:53 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.054 12:22:53 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.054 12:22:53 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.054 12:22:53 ftl -- scripts/common.sh@344 -- # case "$op" in 00:19:28.054 12:22:53 ftl -- scripts/common.sh@345 -- # : 1 00:19:28.054 12:22:53 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.054 12:22:53 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:28.054 12:22:53 ftl -- scripts/common.sh@365 -- # decimal 1 00:19:28.054 12:22:53 ftl -- scripts/common.sh@353 -- # local d=1 00:19:28.054 12:22:53 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.054 12:22:53 ftl -- scripts/common.sh@355 -- # echo 1 00:19:28.054 12:22:53 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.054 12:22:53 ftl -- scripts/common.sh@366 -- # decimal 2 00:19:28.054 12:22:53 ftl -- scripts/common.sh@353 -- # local d=2 00:19:28.054 12:22:53 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.054 12:22:53 ftl -- scripts/common.sh@355 -- # echo 2 00:19:28.054 12:22:53 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.055 12:22:53 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.055 12:22:53 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.055 12:22:53 ftl -- scripts/common.sh@368 -- # return 0 00:19:28.055 12:22:53 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.055 12:22:53 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:28.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.055 --rc genhtml_branch_coverage=1 00:19:28.055 --rc genhtml_function_coverage=1 00:19:28.055 --rc genhtml_legend=1 00:19:28.055 --rc geninfo_all_blocks=1 00:19:28.055 --rc geninfo_unexecuted_blocks=1 00:19:28.055 00:19:28.055 ' 00:19:28.055 12:22:53 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:28.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.055 --rc genhtml_branch_coverage=1 00:19:28.055 --rc genhtml_function_coverage=1 00:19:28.055 --rc genhtml_legend=1 00:19:28.055 --rc geninfo_all_blocks=1 00:19:28.055 --rc geninfo_unexecuted_blocks=1 00:19:28.055 00:19:28.055 ' 00:19:28.055 12:22:53 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:28.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.055 --rc genhtml_branch_coverage=1 00:19:28.055 --rc genhtml_function_coverage=1 00:19:28.055 --rc genhtml_legend=1 00:19:28.055 --rc geninfo_all_blocks=1 00:19:28.055 --rc geninfo_unexecuted_blocks=1 00:19:28.055 00:19:28.055 ' 00:19:28.055 12:22:53 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:28.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.055 --rc genhtml_branch_coverage=1 00:19:28.055 --rc genhtml_function_coverage=1 00:19:28.055 --rc genhtml_legend=1 00:19:28.055 --rc geninfo_all_blocks=1 00:19:28.055 --rc geninfo_unexecuted_blocks=1 00:19:28.055 00:19:28.055 ' 00:19:28.055 12:22:53 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:28.055 12:22:53 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:28.055 12:22:53 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:28.055 12:22:53 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:28.055 12:22:53 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
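Before the FTL suite proceeds, the ublk_recovery run that finished above reduces to a short RPC sequence. A hedged sketch of that flow (commands and arguments as traced; the real test also restarts the target and drives fio in between, omitted here):

  SPDK=/home/vagrant/spdk_repo/spdk
  rpc=$SPDK/scripts/rpc.py
  # bring up the ublk target inside the restarted spdk_tgt (pid 74540)
  $rpc ublk_create_target
  # a 64 MiB malloc bdev with 4 KiB blocks backs the recovered device
  $rpc bdev_malloc_create -b malloc0 64 4096
  # re-attach ublk device 1 to malloc0 (the UBLK_CMD_START_USER_RECOVERY path)
  $rpc ublk_recover_disk malloc0 1
  # ... the 60 s fio verify workload above runs against the recovered ublk block device ...
  # teardown mirrors the trace: stop the disk, then destroy the target
  $rpc ublk_stop_disk 1
  $rpc ublk_destroy_target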
00:19:28.055 12:22:53 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:28.055 12:22:53 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:28.055 12:22:53 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:28.055 12:22:53 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:28.055 12:22:53 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:28.055 12:22:53 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:28.055 12:22:53 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:28.055 12:22:53 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:28.055 12:22:53 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:28.055 12:22:53 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:28.055 12:22:53 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:28.055 12:22:53 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:28.055 12:22:53 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:28.055 12:22:53 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:28.055 12:22:53 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:28.055 12:22:53 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:28.055 12:22:53 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:28.055 12:22:53 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:28.055 12:22:53 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:28.055 12:22:53 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:28.055 12:22:53 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:28.055 12:22:53 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:28.055 12:22:53 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.055 12:22:53 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.055 12:22:53 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:28.055 12:22:53 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:28.055 12:22:53 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:19:28.055 12:22:53 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:28.055 12:22:53 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:28.055 12:22:53 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:28.055 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:28.055 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:28.055 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:28.055 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:28.055 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:28.055 12:22:54 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75335 00:19:28.055 12:22:54 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:28.055 12:22:54 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75335 00:19:28.055 12:22:54 ftl -- common/autotest_common.sh@835 -- # '[' -z 75335 ']' 00:19:28.055 12:22:54 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.055 12:22:54 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.055 12:22:54 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.055 12:22:54 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.055 12:22:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:28.055 [2024-12-05 12:22:54.432798] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:19:28.055 [2024-12-05 12:22:54.433064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75335 ] 00:19:28.055 [2024-12-05 12:22:54.586641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.055 [2024-12-05 12:22:54.679531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.055 12:22:55 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.055 12:22:55 ftl -- common/autotest_common.sh@868 -- # return 0 00:19:28.055 12:22:55 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:28.055 12:22:55 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:28.055 12:22:56 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:28.055 12:22:56 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:28.055 12:22:56 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:28.055 12:22:56 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:28.055 12:22:56 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:28.055 12:22:56 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:28.055 12:22:56 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:28.055 12:22:56 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:28.055 12:22:56 ftl -- ftl/ftl.sh@50 -- # break 00:19:28.056 12:22:56 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:28.056 12:22:56 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:19:28.056 12:22:56 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:28.056 12:22:56 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:28.056 12:22:56 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:28.056 12:22:56 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:28.056 12:22:56 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:28.056 12:22:56 ftl -- ftl/ftl.sh@63 -- # break 00:19:28.056 12:22:56 ftl -- ftl/ftl.sh@66 -- # killprocess 75335 00:19:28.056 12:22:56 ftl -- common/autotest_common.sh@954 -- # '[' -z 75335 ']' 00:19:28.056 12:22:56 ftl -- common/autotest_common.sh@958 -- # kill -0 75335 00:19:28.056 12:22:56 ftl -- common/autotest_common.sh@959 -- # uname 00:19:28.056 12:22:56 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.056 12:22:56 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75335 00:19:28.056 killing process with pid 75335 00:19:28.056 12:22:57 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.056 12:22:57 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.056 12:22:57 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75335' 00:19:28.056 12:22:57 ftl -- common/autotest_common.sh@973 -- # kill 75335 00:19:28.056 12:22:57 ftl -- common/autotest_common.sh@978 -- # wait 75335 00:19:28.056 12:22:58 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:28.056 12:22:58 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:28.056 12:22:58 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:28.056 12:22:58 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.056 12:22:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:28.056 ************************************ 00:19:28.056 START TEST ftl_fio_basic 00:19:28.056 ************************************ 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:28.056 * Looking for test storage... 00:19:28.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:28.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.056 --rc genhtml_branch_coverage=1 00:19:28.056 --rc genhtml_function_coverage=1 00:19:28.056 --rc genhtml_legend=1 00:19:28.056 --rc geninfo_all_blocks=1 00:19:28.056 --rc geninfo_unexecuted_blocks=1 00:19:28.056 00:19:28.056 ' 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:28.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.056 --rc genhtml_branch_coverage=1 00:19:28.056 --rc genhtml_function_coverage=1 00:19:28.056 --rc genhtml_legend=1 00:19:28.056 --rc geninfo_all_blocks=1 00:19:28.056 --rc geninfo_unexecuted_blocks=1 00:19:28.056 00:19:28.056 ' 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:28.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.056 --rc genhtml_branch_coverage=1 00:19:28.056 --rc genhtml_function_coverage=1 00:19:28.056 --rc genhtml_legend=1 00:19:28.056 --rc geninfo_all_blocks=1 00:19:28.056 --rc geninfo_unexecuted_blocks=1 00:19:28.056 00:19:28.056 ' 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:28.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.056 --rc genhtml_branch_coverage=1 00:19:28.056 --rc genhtml_function_coverage=1 00:19:28.056 --rc genhtml_legend=1 00:19:28.056 --rc geninfo_all_blocks=1 00:19:28.056 --rc geninfo_unexecuted_blocks=1 00:19:28.056 00:19:28.056 ' 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
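This lcov version probe (traced here and once already before the ftl suite started) boils down to splitting both version strings on '.', '-' and ':' and comparing the fields numerically. A condensed re-implementation of that idea, not the verbatim scripts/common.sh helper:

  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}
          # non-numeric fields fall back to 0, as decimal() does in the trace
          [[ $x =~ ^[0-9]+$ ]] || x=0
          [[ $y =~ ^[0-9]+$ ]] || y=0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1   # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"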
00:19:28.056 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:28.056 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75467 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75467 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75467 ']' 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.057 12:22:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:28.057 [2024-12-05 12:22:58.523173] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
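fio.sh launches its own spdk_tgt (-m 7, so reactors on cores 0-2) and blocks in waitforlisten until the RPC socket answers; the earlier killprocess of pid 75335 shows the matching teardown. A minimal sketch of that lifecycle, assuming rpc.py's spdk_get_version as the readiness probe (the real helpers in autotest_common.sh carry more retries and diagnostics):

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" -m 7 &
  svcpid=$!
  # waitforlisten: poll the default RPC socket until the target responds
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; do
      kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt died during startup" >&2; exit 1; }
      sleep 0.5
  done
  # killprocess: confirm the pid is still alive, then kill and reap it
  if kill -0 "$svcpid" 2>/dev/null; then
      kill "$svcpid"
      wait "$svcpid"
  fi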
00:19:28.057 [2024-12-05 12:22:58.523435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75467 ] 00:19:28.057 [2024-12-05 12:22:58.675154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:28.057 [2024-12-05 12:22:58.768016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.057 [2024-12-05 12:22:58.768300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.057 [2024-12-05 12:22:58.768321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.642 12:22:59 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.642 12:22:59 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:19:28.642 12:22:59 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:28.642 12:22:59 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:28.642 12:22:59 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:28.642 12:22:59 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:28.642 12:22:59 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:28.642 12:22:59 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:28.902 12:22:59 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:28.902 12:22:59 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:28.902 12:22:59 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:28.902 12:22:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:28.902 12:22:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:28.902 12:22:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:28.902 12:22:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:28.902 12:22:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:29.163 12:22:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:29.163 { 00:19:29.163 "name": "nvme0n1", 00:19:29.163 "aliases": [ 00:19:29.163 "1895847a-8664-47fb-b8f3-6cf2eb0623f9" 00:19:29.163 ], 00:19:29.163 "product_name": "NVMe disk", 00:19:29.163 "block_size": 4096, 00:19:29.163 "num_blocks": 1310720, 00:19:29.163 "uuid": "1895847a-8664-47fb-b8f3-6cf2eb0623f9", 00:19:29.163 "numa_id": -1, 00:19:29.163 "assigned_rate_limits": { 00:19:29.163 "rw_ios_per_sec": 0, 00:19:29.163 "rw_mbytes_per_sec": 0, 00:19:29.163 "r_mbytes_per_sec": 0, 00:19:29.163 "w_mbytes_per_sec": 0 00:19:29.163 }, 00:19:29.163 "claimed": false, 00:19:29.163 "zoned": false, 00:19:29.163 "supported_io_types": { 00:19:29.163 "read": true, 00:19:29.163 "write": true, 00:19:29.163 "unmap": true, 00:19:29.163 "flush": true, 00:19:29.163 "reset": true, 00:19:29.163 "nvme_admin": true, 00:19:29.163 "nvme_io": true, 00:19:29.163 "nvme_io_md": false, 00:19:29.163 "write_zeroes": true, 00:19:29.163 "zcopy": false, 00:19:29.163 "get_zone_info": false, 00:19:29.163 "zone_management": false, 00:19:29.163 "zone_append": false, 00:19:29.163 "compare": true, 00:19:29.163 "compare_and_write": false, 00:19:29.163 "abort": true, 00:19:29.163 
"seek_hole": false, 00:19:29.163 "seek_data": false, 00:19:29.163 "copy": true, 00:19:29.163 "nvme_iov_md": false 00:19:29.163 }, 00:19:29.163 "driver_specific": { 00:19:29.163 "nvme": [ 00:19:29.163 { 00:19:29.163 "pci_address": "0000:00:11.0", 00:19:29.163 "trid": { 00:19:29.163 "trtype": "PCIe", 00:19:29.163 "traddr": "0000:00:11.0" 00:19:29.163 }, 00:19:29.163 "ctrlr_data": { 00:19:29.163 "cntlid": 0, 00:19:29.163 "vendor_id": "0x1b36", 00:19:29.163 "model_number": "QEMU NVMe Ctrl", 00:19:29.163 "serial_number": "12341", 00:19:29.163 "firmware_revision": "8.0.0", 00:19:29.163 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:29.163 "oacs": { 00:19:29.163 "security": 0, 00:19:29.163 "format": 1, 00:19:29.163 "firmware": 0, 00:19:29.163 "ns_manage": 1 00:19:29.163 }, 00:19:29.163 "multi_ctrlr": false, 00:19:29.163 "ana_reporting": false 00:19:29.163 }, 00:19:29.163 "vs": { 00:19:29.163 "nvme_version": "1.4" 00:19:29.163 }, 00:19:29.163 "ns_data": { 00:19:29.163 "id": 1, 00:19:29.163 "can_share": false 00:19:29.163 } 00:19:29.163 } 00:19:29.163 ], 00:19:29.163 "mp_policy": "active_passive" 00:19:29.163 } 00:19:29.163 } 00:19:29.163 ]' 00:19:29.163 12:22:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:29.163 12:22:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:29.163 12:22:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:29.163 12:22:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:29.163 12:22:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:29.163 12:22:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:19:29.163 12:22:59 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:29.163 12:22:59 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:29.163 12:22:59 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:29.163 12:22:59 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:29.163 12:22:59 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:29.425 12:23:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:29.425 12:23:00 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:29.686 12:23:00 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=41426b95-e97f-41e1-91a9-b3d0b18db2f0 00:19:29.686 12:23:00 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 41426b95-e97f-41e1-91a9-b3d0b18db2f0 00:19:29.945 12:23:00 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=55c00790-19f3-4cbf-9bdc-c33a80f99d09 00:19:29.945 12:23:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 55c00790-19f3-4cbf-9bdc-c33a80f99d09 00:19:29.945 12:23:00 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:29.945 12:23:00 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:29.945 12:23:00 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=55c00790-19f3-4cbf-9bdc-c33a80f99d09 00:19:29.945 12:23:00 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:29.945 12:23:00 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 55c00790-19f3-4cbf-9bdc-c33a80f99d09 00:19:29.945 12:23:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=55c00790-19f3-4cbf-9bdc-c33a80f99d09 
00:19:29.945 12:23:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:29.945 12:23:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:29.945 12:23:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:29.945 12:23:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 55c00790-19f3-4cbf-9bdc-c33a80f99d09 00:19:29.945 12:23:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:29.945 { 00:19:29.945 "name": "55c00790-19f3-4cbf-9bdc-c33a80f99d09", 00:19:29.945 "aliases": [ 00:19:29.945 "lvs/nvme0n1p0" 00:19:29.945 ], 00:19:29.945 "product_name": "Logical Volume", 00:19:29.945 "block_size": 4096, 00:19:29.945 "num_blocks": 26476544, 00:19:29.945 "uuid": "55c00790-19f3-4cbf-9bdc-c33a80f99d09", 00:19:29.945 "assigned_rate_limits": { 00:19:29.945 "rw_ios_per_sec": 0, 00:19:29.945 "rw_mbytes_per_sec": 0, 00:19:29.945 "r_mbytes_per_sec": 0, 00:19:29.945 "w_mbytes_per_sec": 0 00:19:29.945 }, 00:19:29.945 "claimed": false, 00:19:29.945 "zoned": false, 00:19:29.945 "supported_io_types": { 00:19:29.945 "read": true, 00:19:29.945 "write": true, 00:19:29.945 "unmap": true, 00:19:29.945 "flush": false, 00:19:29.945 "reset": true, 00:19:29.945 "nvme_admin": false, 00:19:29.945 "nvme_io": false, 00:19:29.945 "nvme_io_md": false, 00:19:29.945 "write_zeroes": true, 00:19:29.945 "zcopy": false, 00:19:29.945 "get_zone_info": false, 00:19:29.945 "zone_management": false, 00:19:29.945 "zone_append": false, 00:19:29.945 "compare": false, 00:19:29.945 "compare_and_write": false, 00:19:29.945 "abort": false, 00:19:29.945 "seek_hole": true, 00:19:29.945 "seek_data": true, 00:19:29.945 "copy": false, 00:19:29.945 "nvme_iov_md": false 00:19:29.945 }, 00:19:29.945 "driver_specific": { 00:19:29.945 "lvol": { 00:19:29.945 "lvol_store_uuid": "41426b95-e97f-41e1-91a9-b3d0b18db2f0", 00:19:29.945 "base_bdev": "nvme0n1", 00:19:29.945 "thin_provision": true, 00:19:29.945 "num_allocated_clusters": 0, 00:19:29.945 "snapshot": false, 00:19:29.945 "clone": false, 00:19:29.945 "esnap_clone": false 00:19:29.945 } 00:19:29.945 } 00:19:29.945 } 00:19:29.946 ]' 00:19:29.946 12:23:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:29.946 12:23:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:29.946 12:23:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:30.204 12:23:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:30.204 12:23:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:30.204 12:23:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:30.204 12:23:00 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:30.204 12:23:00 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:30.204 12:23:00 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:30.463 12:23:01 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:30.463 12:23:01 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:30.463 12:23:01 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 55c00790-19f3-4cbf-9bdc-c33a80f99d09 00:19:30.463 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=55c00790-19f3-4cbf-9bdc-c33a80f99d09 00:19:30.463 12:23:01 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:30.463 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:30.463 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:30.463 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 55c00790-19f3-4cbf-9bdc-c33a80f99d09 00:19:30.463 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:30.463 { 00:19:30.463 "name": "55c00790-19f3-4cbf-9bdc-c33a80f99d09", 00:19:30.463 "aliases": [ 00:19:30.463 "lvs/nvme0n1p0" 00:19:30.463 ], 00:19:30.463 "product_name": "Logical Volume", 00:19:30.463 "block_size": 4096, 00:19:30.463 "num_blocks": 26476544, 00:19:30.463 "uuid": "55c00790-19f3-4cbf-9bdc-c33a80f99d09", 00:19:30.463 "assigned_rate_limits": { 00:19:30.463 "rw_ios_per_sec": 0, 00:19:30.463 "rw_mbytes_per_sec": 0, 00:19:30.463 "r_mbytes_per_sec": 0, 00:19:30.463 "w_mbytes_per_sec": 0 00:19:30.463 }, 00:19:30.463 "claimed": false, 00:19:30.463 "zoned": false, 00:19:30.463 "supported_io_types": { 00:19:30.463 "read": true, 00:19:30.463 "write": true, 00:19:30.463 "unmap": true, 00:19:30.463 "flush": false, 00:19:30.463 "reset": true, 00:19:30.463 "nvme_admin": false, 00:19:30.463 "nvme_io": false, 00:19:30.463 "nvme_io_md": false, 00:19:30.463 "write_zeroes": true, 00:19:30.463 "zcopy": false, 00:19:30.463 "get_zone_info": false, 00:19:30.463 "zone_management": false, 00:19:30.463 "zone_append": false, 00:19:30.463 "compare": false, 00:19:30.463 "compare_and_write": false, 00:19:30.463 "abort": false, 00:19:30.463 "seek_hole": true, 00:19:30.463 "seek_data": true, 00:19:30.463 "copy": false, 00:19:30.463 "nvme_iov_md": false 00:19:30.463 }, 00:19:30.463 "driver_specific": { 00:19:30.463 "lvol": { 00:19:30.463 "lvol_store_uuid": "41426b95-e97f-41e1-91a9-b3d0b18db2f0", 00:19:30.463 "base_bdev": "nvme0n1", 00:19:30.463 "thin_provision": true, 00:19:30.463 "num_allocated_clusters": 0, 00:19:30.463 "snapshot": false, 00:19:30.463 "clone": false, 00:19:30.463 "esnap_clone": false 00:19:30.463 } 00:19:30.463 } 00:19:30.463 } 00:19:30.463 ]' 00:19:30.463 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:30.463 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:30.463 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:30.722 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:30.722 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:30.722 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:30.722 12:23:01 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:30.722 12:23:01 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:30.722 12:23:01 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:30.722 12:23:01 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:19:30.722 12:23:01 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:30.722 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:30.722 12:23:01 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 55c00790-19f3-4cbf-9bdc-c33a80f99d09 00:19:30.722 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=55c00790-19f3-4cbf-9bdc-c33a80f99d09 00:19:30.722 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:30.722 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:30.722 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:30.722 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 55c00790-19f3-4cbf-9bdc-c33a80f99d09 00:19:30.981 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:30.981 { 00:19:30.981 "name": "55c00790-19f3-4cbf-9bdc-c33a80f99d09", 00:19:30.981 "aliases": [ 00:19:30.981 "lvs/nvme0n1p0" 00:19:30.981 ], 00:19:30.981 "product_name": "Logical Volume", 00:19:30.981 "block_size": 4096, 00:19:30.981 "num_blocks": 26476544, 00:19:30.981 "uuid": "55c00790-19f3-4cbf-9bdc-c33a80f99d09", 00:19:30.981 "assigned_rate_limits": { 00:19:30.981 "rw_ios_per_sec": 0, 00:19:30.981 "rw_mbytes_per_sec": 0, 00:19:30.981 "r_mbytes_per_sec": 0, 00:19:30.981 "w_mbytes_per_sec": 0 00:19:30.981 }, 00:19:30.981 "claimed": false, 00:19:30.981 "zoned": false, 00:19:30.981 "supported_io_types": { 00:19:30.981 "read": true, 00:19:30.981 "write": true, 00:19:30.981 "unmap": true, 00:19:30.981 "flush": false, 00:19:30.981 "reset": true, 00:19:30.981 "nvme_admin": false, 00:19:30.981 "nvme_io": false, 00:19:30.981 "nvme_io_md": false, 00:19:30.981 "write_zeroes": true, 00:19:30.981 "zcopy": false, 00:19:30.981 "get_zone_info": false, 00:19:30.981 "zone_management": false, 00:19:30.981 "zone_append": false, 00:19:30.981 "compare": false, 00:19:30.981 "compare_and_write": false, 00:19:30.981 "abort": false, 00:19:30.981 "seek_hole": true, 00:19:30.981 "seek_data": true, 00:19:30.981 "copy": false, 00:19:30.981 "nvme_iov_md": false 00:19:30.981 }, 00:19:30.981 "driver_specific": { 00:19:30.982 "lvol": { 00:19:30.982 "lvol_store_uuid": "41426b95-e97f-41e1-91a9-b3d0b18db2f0", 00:19:30.982 "base_bdev": "nvme0n1", 00:19:30.982 "thin_provision": true, 00:19:30.982 "num_allocated_clusters": 0, 00:19:30.982 "snapshot": false, 00:19:30.982 "clone": false, 00:19:30.982 "esnap_clone": false 00:19:30.982 } 00:19:30.982 } 00:19:30.982 } 00:19:30.982 ]' 00:19:30.982 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:30.982 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:30.982 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:30.982 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:30.982 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:30.982 12:23:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:30.982 12:23:01 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:30.982 12:23:01 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:30.982 12:23:01 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 55c00790-19f3-4cbf-9bdc-c33a80f99d09 -c nvc0n1p0 --l2p_dram_limit 60 00:19:31.242 [2024-12-05 12:23:01.996108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.242 [2024-12-05 12:23:01.996154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:31.242 [2024-12-05 12:23:01.996168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:31.242 
[2024-12-05 12:23:01.996176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.242 [2024-12-05 12:23:01.996241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.242 [2024-12-05 12:23:01.996251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.242 [2024-12-05 12:23:01.996260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:31.242 [2024-12-05 12:23:01.996266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.242 [2024-12-05 12:23:01.996309] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:31.242 [2024-12-05 12:23:01.996979] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:31.242 [2024-12-05 12:23:01.997007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.242 [2024-12-05 12:23:01.997014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.242 [2024-12-05 12:23:01.997023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:19:31.242 [2024-12-05 12:23:01.997029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.242 [2024-12-05 12:23:01.997068] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID faca881b-671b-4efe-abbf-5836d1439182 00:19:31.242 [2024-12-05 12:23:01.998416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.242 [2024-12-05 12:23:01.998446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:31.242 [2024-12-05 12:23:01.998456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:31.242 [2024-12-05 12:23:01.998479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.242 [2024-12-05 12:23:02.005320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.242 [2024-12-05 12:23:02.005355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.242 [2024-12-05 12:23:02.005363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.764 ms 00:19:31.242 [2024-12-05 12:23:02.005374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.242 [2024-12-05 12:23:02.005519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.242 [2024-12-05 12:23:02.005534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.242 [2024-12-05 12:23:02.005542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:31.242 [2024-12-05 12:23:02.005552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.242 [2024-12-05 12:23:02.005603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.242 [2024-12-05 12:23:02.005613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:31.242 [2024-12-05 12:23:02.005620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:31.242 [2024-12-05 12:23:02.005628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.242 [2024-12-05 12:23:02.005660] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:31.242 [2024-12-05 12:23:02.008887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.242 [2024-12-05 
12:23:02.008913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.242 [2024-12-05 12:23:02.008924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.232 ms 00:19:31.242 [2024-12-05 12:23:02.008932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.242 [2024-12-05 12:23:02.008967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.242 [2024-12-05 12:23:02.008973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:31.242 [2024-12-05 12:23:02.008982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:31.242 [2024-12-05 12:23:02.008988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.242 [2024-12-05 12:23:02.009008] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:31.242 [2024-12-05 12:23:02.009132] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:31.242 [2024-12-05 12:23:02.009145] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:31.242 [2024-12-05 12:23:02.009154] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:31.242 [2024-12-05 12:23:02.009164] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:31.242 [2024-12-05 12:23:02.009171] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:31.242 [2024-12-05 12:23:02.009179] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:31.242 [2024-12-05 12:23:02.009185] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:31.242 [2024-12-05 12:23:02.009192] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:31.242 [2024-12-05 12:23:02.009198] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:31.242 [2024-12-05 12:23:02.009206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.242 [2024-12-05 12:23:02.009213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:31.243 [2024-12-05 12:23:02.009221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:19:31.243 [2024-12-05 12:23:02.009227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.243 [2024-12-05 12:23:02.009300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.243 [2024-12-05 12:23:02.009307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:31.243 [2024-12-05 12:23:02.009314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:31.243 [2024-12-05 12:23:02.009319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.243 [2024-12-05 12:23:02.009425] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:31.243 [2024-12-05 12:23:02.009433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:31.243 [2024-12-05 12:23:02.009443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.243 [2024-12-05 12:23:02.009448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.243 [2024-12-05 12:23:02.009456] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:19:31.243 [2024-12-05 12:23:02.009475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:31.243 [2024-12-05 12:23:02.009483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:31.243 [2024-12-05 12:23:02.009488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:31.243 [2024-12-05 12:23:02.009495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:31.243 [2024-12-05 12:23:02.009500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.243 [2024-12-05 12:23:02.009506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:31.243 [2024-12-05 12:23:02.009512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:31.243 [2024-12-05 12:23:02.009519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.243 [2024-12-05 12:23:02.009525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:31.243 [2024-12-05 12:23:02.009532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:31.243 [2024-12-05 12:23:02.009537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.243 [2024-12-05 12:23:02.009552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:31.243 [2024-12-05 12:23:02.009557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:31.243 [2024-12-05 12:23:02.009564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.243 [2024-12-05 12:23:02.009569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:31.243 [2024-12-05 12:23:02.009575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:31.243 [2024-12-05 12:23:02.009580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.243 [2024-12-05 12:23:02.009586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:31.243 [2024-12-05 12:23:02.009591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:31.243 [2024-12-05 12:23:02.009597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.243 [2024-12-05 12:23:02.009602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:31.243 [2024-12-05 12:23:02.009608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:31.243 [2024-12-05 12:23:02.009613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.243 [2024-12-05 12:23:02.009624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:31.243 [2024-12-05 12:23:02.009629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:31.243 [2024-12-05 12:23:02.009635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.243 [2024-12-05 12:23:02.009640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:31.243 [2024-12-05 12:23:02.009650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:31.243 [2024-12-05 12:23:02.009667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.243 [2024-12-05 12:23:02.009674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:31.243 [2024-12-05 12:23:02.009679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:31.243 [2024-12-05 12:23:02.009685] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.243 [2024-12-05 12:23:02.009690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:31.243 [2024-12-05 12:23:02.009697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:31.243 [2024-12-05 12:23:02.009702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.243 [2024-12-05 12:23:02.009708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:31.243 [2024-12-05 12:23:02.009713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:31.243 [2024-12-05 12:23:02.009721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.243 [2024-12-05 12:23:02.009726] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:31.243 [2024-12-05 12:23:02.009733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:31.243 [2024-12-05 12:23:02.009739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.243 [2024-12-05 12:23:02.009746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.243 [2024-12-05 12:23:02.009752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:31.243 [2024-12-05 12:23:02.009761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:31.243 [2024-12-05 12:23:02.009766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:31.243 [2024-12-05 12:23:02.009773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:31.243 [2024-12-05 12:23:02.009778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:31.243 [2024-12-05 12:23:02.009785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:31.243 [2024-12-05 12:23:02.009794] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:31.243 [2024-12-05 12:23:02.009803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.243 [2024-12-05 12:23:02.009810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:31.243 [2024-12-05 12:23:02.009817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:31.243 [2024-12-05 12:23:02.009822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:31.243 [2024-12-05 12:23:02.009830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:31.243 [2024-12-05 12:23:02.009836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:31.243 [2024-12-05 12:23:02.009845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:31.243 [2024-12-05 12:23:02.009851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:31.243 [2024-12-05 12:23:02.009858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:19:31.243 [2024-12-05 12:23:02.009863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:31.243 [2024-12-05 12:23:02.009871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:31.243 [2024-12-05 12:23:02.009877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:31.243 [2024-12-05 12:23:02.009885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:31.243 [2024-12-05 12:23:02.009890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:31.243 [2024-12-05 12:23:02.009898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:31.243 [2024-12-05 12:23:02.009903] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:31.243 [2024-12-05 12:23:02.009911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.243 [2024-12-05 12:23:02.009919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:31.243 [2024-12-05 12:23:02.009926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:31.243 [2024-12-05 12:23:02.009932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:31.243 [2024-12-05 12:23:02.009939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:31.243 [2024-12-05 12:23:02.009944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.243 [2024-12-05 12:23:02.009951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:31.243 [2024-12-05 12:23:02.009957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:19:31.243 [2024-12-05 12:23:02.009964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.243 [2024-12-05 12:23:02.010034] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
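The dump above is FTL's startup layout report for ftl0: it records where each metadata region (superblock and mirrors, the 80 MiB L2P, band and P2L checkpoint areas, trim map and log) sits on the NV cache versus the ~100 GiB base device, then traces the 'Layout upgrade' step of the 'FTL startup' management process. Because this is a freshly created device, the NV cache data region is scrubbed before use. The suite drives this lifecycle over SPDK's rpc.py; a minimal sketch follows, using the bdev names reported later in this log (driver_specific shows base_bdev 55c00790-19f3-4cbf-9bdc-c33a80f99d09 and cache nvc0n1p0). The bdev_ftl_create flags follow the SPDK FTL documentation; the other invocations appear verbatim further down in this log:

    # Create the FTL bdev over a base bdev and an NV cache bdev (per SPDK FTL docs)
    ./scripts/rpc.py bdev_ftl_create -b ftl0 -d 55c00790-19f3-4cbf-9bdc-c33a80f99d09 -c nvc0n1p0
    # Wait for examination, then query the resulting FTL disk (as waitforbdev does below)
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
    # Clean shutdown persists L2P and metadata, as the 'FTL shutdown' trace shows later
    ./scripts/rpc.py bdev_ftl_unload -b ftl0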
00:19:31.243 [2024-12-05 12:23:02.010051] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:33.771 [2024-12-05 12:23:04.416499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.771 [2024-12-05 12:23:04.416748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:33.771 [2024-12-05 12:23:04.416890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2406.454 ms 00:19:33.771 [2024-12-05 12:23:04.416920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.771 [2024-12-05 12:23:04.444683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.771 [2024-12-05 12:23:04.444847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:33.771 [2024-12-05 12:23:04.444911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.501 ms 00:19:33.771 [2024-12-05 12:23:04.444940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.771 [2024-12-05 12:23:04.445082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.771 [2024-12-05 12:23:04.445116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:33.771 [2024-12-05 12:23:04.445204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:33.771 [2024-12-05 12:23:04.445233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.771 [2024-12-05 12:23:04.487687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.771 [2024-12-05 12:23:04.487837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:33.771 [2024-12-05 12:23:04.487907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.396 ms 00:19:33.771 [2024-12-05 12:23:04.487937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.771 [2024-12-05 12:23:04.487988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.771 [2024-12-05 12:23:04.488013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:33.771 [2024-12-05 12:23:04.488034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:33.771 [2024-12-05 12:23:04.488056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.771 [2024-12-05 12:23:04.488578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.771 [2024-12-05 12:23:04.488689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:33.771 [2024-12-05 12:23:04.488746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:19:33.771 [2024-12-05 12:23:04.488774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.771 [2024-12-05 12:23:04.488912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.771 [2024-12-05 12:23:04.488973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:33.771 [2024-12-05 12:23:04.489024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:33.771 [2024-12-05 12:23:04.489047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.771 [2024-12-05 12:23:04.504892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.771 [2024-12-05 12:23:04.505009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:33.771 [2024-12-05 
12:23:04.505063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.805 ms 00:19:33.771 [2024-12-05 12:23:04.505088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.771 [2024-12-05 12:23:04.517423] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:33.771 [2024-12-05 12:23:04.534745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.771 [2024-12-05 12:23:04.534854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:33.771 [2024-12-05 12:23:04.534874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.536 ms 00:19:33.772 [2024-12-05 12:23:04.534882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.772 [2024-12-05 12:23:04.587686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.772 [2024-12-05 12:23:04.587724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:33.772 [2024-12-05 12:23:04.587741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.771 ms 00:19:33.772 [2024-12-05 12:23:04.587749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.772 [2024-12-05 12:23:04.587940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.772 [2024-12-05 12:23:04.587950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:33.772 [2024-12-05 12:23:04.587963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:19:33.772 [2024-12-05 12:23:04.587970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.772 [2024-12-05 12:23:04.611027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.772 [2024-12-05 12:23:04.611060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:33.772 [2024-12-05 12:23:04.611074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.005 ms 00:19:33.772 [2024-12-05 12:23:04.611081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.772 [2024-12-05 12:23:04.633554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.772 [2024-12-05 12:23:04.633665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:33.772 [2024-12-05 12:23:04.633684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.428 ms 00:19:33.772 [2024-12-05 12:23:04.633692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.772 [2024-12-05 12:23:04.634275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.772 [2024-12-05 12:23:04.634294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:33.772 [2024-12-05 12:23:04.634304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:19:33.772 [2024-12-05 12:23:04.634312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.030 [2024-12-05 12:23:04.700261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.030 [2024-12-05 12:23:04.700295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:34.030 [2024-12-05 12:23:04.700311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.907 ms 00:19:34.030 [2024-12-05 12:23:04.700322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.030 [2024-12-05 
12:23:04.724688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.030 [2024-12-05 12:23:04.724806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:34.030 [2024-12-05 12:23:04.724827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.282 ms 00:19:34.030 [2024-12-05 12:23:04.724836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.030 [2024-12-05 12:23:04.747106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.030 [2024-12-05 12:23:04.747136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:34.030 [2024-12-05 12:23:04.747148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.234 ms 00:19:34.030 [2024-12-05 12:23:04.747156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.030 [2024-12-05 12:23:04.769940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.030 [2024-12-05 12:23:04.769970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:34.030 [2024-12-05 12:23:04.769981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.743 ms 00:19:34.030 [2024-12-05 12:23:04.769989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.030 [2024-12-05 12:23:04.770037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.030 [2024-12-05 12:23:04.770045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:34.030 [2024-12-05 12:23:04.770061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:34.030 [2024-12-05 12:23:04.770068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.030 [2024-12-05 12:23:04.770152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.030 [2024-12-05 12:23:04.770161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:34.030 [2024-12-05 12:23:04.770171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:34.030 [2024-12-05 12:23:04.770179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.030 [2024-12-05 12:23:04.771199] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2774.630 ms, result 0 00:19:34.030 { 00:19:34.030 "name": "ftl0", 00:19:34.030 "uuid": "faca881b-671b-4efe-abbf-5836d1439182" 00:19:34.030 } 00:19:34.030 12:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:34.030 12:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:34.030 12:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:34.030 12:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:19:34.030 12:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:34.030 12:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:34.030 12:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:34.287 12:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:34.544 [ 00:19:34.544 { 00:19:34.544 "name": "ftl0", 00:19:34.544 "aliases": [ 00:19:34.544 "faca881b-671b-4efe-abbf-5836d1439182" 00:19:34.544 ], 00:19:34.544 "product_name": "FTL 
disk", 00:19:34.544 "block_size": 4096, 00:19:34.544 "num_blocks": 20971520, 00:19:34.544 "uuid": "faca881b-671b-4efe-abbf-5836d1439182", 00:19:34.544 "assigned_rate_limits": { 00:19:34.544 "rw_ios_per_sec": 0, 00:19:34.544 "rw_mbytes_per_sec": 0, 00:19:34.544 "r_mbytes_per_sec": 0, 00:19:34.544 "w_mbytes_per_sec": 0 00:19:34.544 }, 00:19:34.544 "claimed": false, 00:19:34.544 "zoned": false, 00:19:34.544 "supported_io_types": { 00:19:34.544 "read": true, 00:19:34.544 "write": true, 00:19:34.544 "unmap": true, 00:19:34.544 "flush": true, 00:19:34.544 "reset": false, 00:19:34.545 "nvme_admin": false, 00:19:34.545 "nvme_io": false, 00:19:34.545 "nvme_io_md": false, 00:19:34.545 "write_zeroes": true, 00:19:34.545 "zcopy": false, 00:19:34.545 "get_zone_info": false, 00:19:34.545 "zone_management": false, 00:19:34.545 "zone_append": false, 00:19:34.545 "compare": false, 00:19:34.545 "compare_and_write": false, 00:19:34.545 "abort": false, 00:19:34.545 "seek_hole": false, 00:19:34.545 "seek_data": false, 00:19:34.545 "copy": false, 00:19:34.545 "nvme_iov_md": false 00:19:34.545 }, 00:19:34.545 "driver_specific": { 00:19:34.545 "ftl": { 00:19:34.545 "base_bdev": "55c00790-19f3-4cbf-9bdc-c33a80f99d09", 00:19:34.545 "cache": "nvc0n1p0" 00:19:34.545 } 00:19:34.545 } 00:19:34.545 } 00:19:34.545 ] 00:19:34.545 12:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:19:34.545 12:23:05 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:34.545 12:23:05 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:34.545 12:23:05 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:34.545 12:23:05 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:34.802 [2024-12-05 12:23:05.567839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.802 [2024-12-05 12:23:05.567879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:34.802 [2024-12-05 12:23:05.567890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:34.802 [2024-12-05 12:23:05.567899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.803 [2024-12-05 12:23:05.567930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:34.803 [2024-12-05 12:23:05.570162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.803 [2024-12-05 12:23:05.570187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:34.803 [2024-12-05 12:23:05.570198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.216 ms 00:19:34.803 [2024-12-05 12:23:05.570204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.803 [2024-12-05 12:23:05.570621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.803 [2024-12-05 12:23:05.570635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:34.803 [2024-12-05 12:23:05.570645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:19:34.803 [2024-12-05 12:23:05.570652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.803 [2024-12-05 12:23:05.573104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.803 [2024-12-05 12:23:05.573124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:34.803 
[2024-12-05 12:23:05.573133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.428 ms 00:19:34.803 [2024-12-05 12:23:05.573139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.803 [2024-12-05 12:23:05.577977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.803 [2024-12-05 12:23:05.580440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:34.803 [2024-12-05 12:23:05.580459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.814 ms 00:19:34.803 [2024-12-05 12:23:05.580481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.803 [2024-12-05 12:23:05.599145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.803 [2024-12-05 12:23:05.599249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:34.803 [2024-12-05 12:23:05.599277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.588 ms 00:19:34.803 [2024-12-05 12:23:05.599284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.803 [2024-12-05 12:23:05.612120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.803 [2024-12-05 12:23:05.612148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:34.803 [2024-12-05 12:23:05.612162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.800 ms 00:19:34.803 [2024-12-05 12:23:05.612169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.803 [2024-12-05 12:23:05.612319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.803 [2024-12-05 12:23:05.612327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:34.803 [2024-12-05 12:23:05.612335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:19:34.803 [2024-12-05 12:23:05.612342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.803 [2024-12-05 12:23:05.630276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.803 [2024-12-05 12:23:05.630368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:34.803 [2024-12-05 12:23:05.630383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.914 ms 00:19:34.803 [2024-12-05 12:23:05.630389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.803 [2024-12-05 12:23:05.647634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.803 [2024-12-05 12:23:05.647658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:34.803 [2024-12-05 12:23:05.647667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.210 ms 00:19:34.803 [2024-12-05 12:23:05.647673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.803 [2024-12-05 12:23:05.664605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.803 [2024-12-05 12:23:05.664693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:34.803 [2024-12-05 12:23:05.664707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.895 ms 00:19:34.803 [2024-12-05 12:23:05.664712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.062 [2024-12-05 12:23:05.681588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.062 [2024-12-05 12:23:05.681617] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:35.062 [2024-12-05 12:23:05.681626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.792 ms 00:19:35.062 [2024-12-05 12:23:05.681632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.062 [2024-12-05 12:23:05.681668] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:35.062 [2024-12-05 12:23:05.681680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 
[2024-12-05 12:23:05.681830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.681993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:19:35.062 [2024-12-05 12:23:05.682001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:35.062 [2024-12-05 12:23:05.682162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:35.063 [2024-12-05 12:23:05.682380] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:35.063 [2024-12-05 12:23:05.682388] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: faca881b-671b-4efe-abbf-5836d1439182 00:19:35.063 [2024-12-05 12:23:05.682394] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:35.063 [2024-12-05 12:23:05.682403] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:35.063 [2024-12-05 12:23:05.682409] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:35.063 [2024-12-05 12:23:05.682419] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:35.063 [2024-12-05 12:23:05.682424] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:35.063 [2024-12-05 12:23:05.682432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:35.063 [2024-12-05 12:23:05.682438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:35.063 [2024-12-05 12:23:05.682444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:35.063 [2024-12-05 12:23:05.682449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:35.063 [2024-12-05 12:23:05.682456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.063 [2024-12-05 12:23:05.682475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:35.063 [2024-12-05 12:23:05.682485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:19:35.063 [2024-12-05 12:23:05.682490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.063 [2024-12-05 12:23:05.692539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.063 [2024-12-05 12:23:05.692566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:35.063 [2024-12-05 12:23:05.692575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.015 ms 00:19:35.063 [2024-12-05 12:23:05.692582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.063 [2024-12-05 12:23:05.692870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.063 [2024-12-05 12:23:05.692880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:35.063 [2024-12-05 12:23:05.692889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:19:35.063 [2024-12-05 12:23:05.692894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.063 [2024-12-05 12:23:05.729210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.063 [2024-12-05 12:23:05.729239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.063 [2024-12-05 12:23:05.729249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.063 [2024-12-05 12:23:05.729255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
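In the statistics dump above, 'total writes: 960' against 'user writes: 0' is why WAF prints as 'inf': write amplification is the ratio of media writes to user writes, and at unload time no user data had been written yet, only FTL's own metadata. The surrounding 'Rollback' entries are the 'FTL shutdown' management process unwinding each startup step in reverse; on a clean device they all complete in 0.000 ms. As a throwaway sketch (assuming a saved copy of this output with one log entry per line, under the hypothetical name ftl.log), the reported WAF can be recomputed from the dump:

    # Hypothetical helper; ftl.log stands in for a captured copy of this log.
    awk '/total writes:/ {t=$NF} /user writes:/ {u=$NF} END {print (u ? t/u : "inf")}' ftl.log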
00:19:35.063 [2024-12-05 12:23:05.729307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.063 [2024-12-05 12:23:05.729314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.063 [2024-12-05 12:23:05.729328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.063 [2024-12-05 12:23:05.729335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.063 [2024-12-05 12:23:05.729418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.063 [2024-12-05 12:23:05.729429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.063 [2024-12-05 12:23:05.729437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.063 [2024-12-05 12:23:05.729443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.063 [2024-12-05 12:23:05.729480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.063 [2024-12-05 12:23:05.729487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.063 [2024-12-05 12:23:05.729495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.063 [2024-12-05 12:23:05.729501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.063 [2024-12-05 12:23:05.795180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.063 [2024-12-05 12:23:05.795220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.063 [2024-12-05 12:23:05.795231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.063 [2024-12-05 12:23:05.795238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.063 [2024-12-05 12:23:05.845559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.063 [2024-12-05 12:23:05.845596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.063 [2024-12-05 12:23:05.845607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.063 [2024-12-05 12:23:05.845613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.063 [2024-12-05 12:23:05.845687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.063 [2024-12-05 12:23:05.845695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.063 [2024-12-05 12:23:05.845706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.063 [2024-12-05 12:23:05.845712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.063 [2024-12-05 12:23:05.845788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.063 [2024-12-05 12:23:05.845796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.063 [2024-12-05 12:23:05.845804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.063 [2024-12-05 12:23:05.845810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.063 [2024-12-05 12:23:05.845903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.063 [2024-12-05 12:23:05.845912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.063 [2024-12-05 12:23:05.845919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.063 [2024-12-05 
12:23:05.845927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.063 [2024-12-05 12:23:05.845973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.063 [2024-12-05 12:23:05.845980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:35.063 [2024-12-05 12:23:05.845988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.063 [2024-12-05 12:23:05.845994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.063 [2024-12-05 12:23:05.846037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.063 [2024-12-05 12:23:05.846045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.063 [2024-12-05 12:23:05.846053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.063 [2024-12-05 12:23:05.846060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.063 [2024-12-05 12:23:05.846110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.063 [2024-12-05 12:23:05.846118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.063 [2024-12-05 12:23:05.846126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.063 [2024-12-05 12:23:05.846133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.063 [2024-12-05 12:23:05.846273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 278.410 ms, result 0 00:19:35.063 true 00:19:35.063 12:23:05 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75467 00:19:35.063 12:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75467 ']' 00:19:35.063 12:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75467 00:19:35.063 12:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:19:35.063 12:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.064 12:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75467 00:19:35.064 killing process with pid 75467 00:19:35.064 12:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.064 12:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.064 12:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75467' 00:19:35.064 12:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75467 00:19:35.064 12:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75467 00:19:41.619 12:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:19:41.619 12:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:41.620 12:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:41.620 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:19:41.620 fio-3.35 00:19:41.620 Starting 1 thread 00:19:48.189 00:19:48.190 test: (groupid=0, jobs=1): err= 0: pid=75652: Thu Dec 5 12:23:18 2024 00:19:48.190 read: IOPS=784, BW=52.1MiB/s (54.6MB/s)(255MiB/4885msec) 00:19:48.190 slat (nsec): min=3007, max=18557, avg=3892.22, stdev=1699.48 00:19:48.190 clat (usec): min=289, max=1238, avg=580.98, stdev=162.22 00:19:48.190 lat (usec): min=295, max=1243, avg=584.87, stdev=162.30 00:19:48.190 clat percentiles (usec): 00:19:48.190 | 1.00th=[ 322], 5.00th=[ 392], 10.00th=[ 404], 20.00th=[ 478], 00:19:48.190 | 30.00th=[ 523], 40.00th=[ 529], 50.00th=[ 529], 60.00th=[ 529], 00:19:48.190 | 70.00th=[ 545], 80.00th=[ 791], 90.00th=[ 865], 95.00th=[ 873], 00:19:48.190 | 99.00th=[ 996], 99.50th=[ 1057], 99.90th=[ 1172], 99.95th=[ 1237], 00:19:48.190 | 99.99th=[ 1237] 00:19:48.190 write: IOPS=790, BW=52.5MiB/s (55.1MB/s)(256MiB/4877msec); 0 zone resets 00:19:48.190 slat (nsec): min=13522, max=50991, avg=17701.91, stdev=2893.06 00:19:48.190 clat (usec): min=322, max=3000, avg=657.53, stdev=184.81 00:19:48.190 lat (usec): min=350, max=3018, avg=675.23, stdev=184.67 00:19:48.190 clat percentiles (usec): 00:19:48.190 | 1.00th=[ 375], 5.00th=[ 474], 10.00th=[ 482], 20.00th=[ 545], 00:19:48.190 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 619], 00:19:48.190 | 70.00th=[ 627], 80.00th=[ 881], 90.00th=[ 947], 95.00th=[ 955], 00:19:48.190 | 99.00th=[ 1123], 99.50th=[ 1254], 99.90th=[ 1860], 99.95th=[ 2573], 00:19:48.190 | 99.99th=[ 2999] 00:19:48.190 bw ( KiB/s): min=45016, max=63240, per=100.00%, avg=54022.22, stdev=6516.58, samples=9 00:19:48.190 iops : min= 662, max= 930, avg=794.44, stdev=95.83, samples=9 00:19:48.190 lat (usec) : 500=20.22%, 750=56.95%, 1000=21.19% 
00:19:48.190 lat (msec) : 2=1.60%, 4=0.04% 00:19:48.190 cpu : usr=99.37%, sys=0.04%, ctx=7, majf=0, minf=1169 00:19:48.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.190 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:48.190 00:19:48.190 Run status group 0 (all jobs): 00:19:48.190 READ: bw=52.1MiB/s (54.6MB/s), 52.1MiB/s-52.1MiB/s (54.6MB/s-54.6MB/s), io=255MiB (267MB), run=4885-4885msec 00:19:48.190 WRITE: bw=52.5MiB/s (55.1MB/s), 52.5MiB/s-52.5MiB/s (55.1MB/s-55.1MB/s), io=256MiB (269MB), run=4877-4877msec 00:19:49.576 ----------------------------------------------------- 00:19:49.576 Suppressions used: 00:19:49.576 count bytes template 00:19:49.576 1 5 /usr/src/fio/parse.c 00:19:49.576 1 8 libtcmalloc_minimal.so 00:19:49.576 1 904 libcrypto.so 00:19:49.576 ----------------------------------------------------- 00:19:49.576 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:49.576 12:23:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:49.576 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:49.576 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:49.576 fio-3.35 00:19:49.576 Starting 2 threads 00:20:16.124 00:20:16.124 first_half: (groupid=0, jobs=1): err= 0: pid=75764: Thu Dec 5 12:23:46 2024 00:20:16.124 read: IOPS=2604, BW=10.2MiB/s (10.7MB/s)(256MiB/25143msec) 00:20:16.124 slat (nsec): min=3044, max=51252, avg=3744.77, stdev=877.76 00:20:16.124 clat (usec): min=1004, max=343929, avg=40058.30, stdev=32290.78 00:20:16.124 lat (usec): min=1009, max=343933, avg=40062.04, stdev=32290.83 00:20:16.124 clat percentiles (msec): 00:20:16.124 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 30], 00:20:16.124 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 34], 00:20:16.124 | 70.00th=[ 37], 80.00th=[ 40], 90.00th=[ 45], 95.00th=[ 89], 00:20:16.124 | 99.00th=[ 213], 99.50th=[ 239], 99.90th=[ 279], 99.95th=[ 305], 00:20:16.124 | 99.99th=[ 338] 00:20:16.124 write: IOPS=2609, BW=10.2MiB/s (10.7MB/s)(256MiB/25112msec); 0 zone resets 00:20:16.124 slat (usec): min=3, max=986, avg= 5.25, stdev= 7.03 00:20:16.124 clat (usec): min=407, max=53564, avg=9066.71, stdev=8667.14 00:20:16.124 lat (usec): min=412, max=53569, avg=9071.97, stdev=8667.55 00:20:16.124 clat percentiles (usec): 00:20:16.124 | 1.00th=[ 1139], 5.00th=[ 1565], 10.00th=[ 2040], 20.00th=[ 3556], 00:20:16.124 | 30.00th=[ 4948], 40.00th=[ 5669], 50.00th=[ 6652], 60.00th=[ 7701], 00:20:16.124 | 70.00th=[ 8979], 80.00th=[11338], 90.00th=[19006], 95.00th=[25822], 00:20:16.124 | 99.00th=[45876], 99.50th=[47449], 99.90th=[51119], 99.95th=[52167], 00:20:16.124 | 99.99th=[53216] 00:20:16.124 bw ( KiB/s): min= 4936, max=45312, per=99.79%, avg=20835.80, stdev=11448.29, samples=25 00:20:16.124 iops : min= 1234, max=11328, avg=5208.92, stdev=2862.07, samples=25 00:20:16.124 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.22% 00:20:16.124 lat (msec) : 2=4.55%, 4=6.76%, 10=27.09%, 20=8.40%, 50=48.94% 00:20:16.124 lat (msec) : 100=1.74%, 250=2.12%, 500=0.16% 00:20:16.124 cpu : usr=99.24%, sys=0.12%, ctx=47, majf=0, minf=5532 00:20:16.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:16.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.124 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:16.124 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:16.124 second_half: (groupid=0, jobs=1): err= 0: pid=75765: Thu Dec 5 12:23:46 2024 00:20:16.124 read: IOPS=2627, BW=10.3MiB/s (10.8MB/s)(256MiB/24920msec) 00:20:16.124 slat (nsec): min=2989, max=37584, avg=3748.56, stdev=845.97 00:20:16.124 clat (msec): min=12, max=275, avg=40.65, stdev=30.06 00:20:16.124 lat (msec): min=12, max=275, avg=40.66, stdev=30.06 00:20:16.124 clat percentiles (msec): 00:20:16.124 | 1.00th=[ 28], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 30], 00:20:16.124 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 35], 00:20:16.124 | 70.00th=[ 37], 80.00th=[ 41], 90.00th=[ 48], 95.00th=[ 83], 
00:20:16.124 | 99.00th=[ 207], 99.50th=[ 228], 99.90th=[ 249], 99.95th=[ 257], 00:20:16.124 | 99.99th=[ 271] 00:20:16.124 write: IOPS=2640, BW=10.3MiB/s (10.8MB/s)(256MiB/24816msec); 0 zone resets 00:20:16.124 slat (usec): min=3, max=1198, avg= 5.26, stdev= 9.17 00:20:16.124 clat (usec): min=377, max=49788, avg=8033.16, stdev=5535.80 00:20:16.124 lat (usec): min=384, max=49793, avg=8038.42, stdev=5536.68 00:20:16.124 clat percentiles (usec): 00:20:16.124 | 1.00th=[ 1336], 5.00th=[ 2212], 10.00th=[ 3032], 20.00th=[ 4113], 00:20:16.124 | 30.00th=[ 5014], 40.00th=[ 5866], 50.00th=[ 6587], 60.00th=[ 7570], 00:20:16.124 | 70.00th=[ 8717], 80.00th=[10552], 90.00th=[15533], 95.00th=[20055], 00:20:16.124 | 99.00th=[26608], 99.50th=[32637], 99.90th=[45351], 99.95th=[47449], 00:20:16.124 | 99.99th=[49021] 00:20:16.124 bw ( KiB/s): min= 832, max=47496, per=100.00%, avg=22625.17, stdev=15331.00, samples=23 00:20:16.124 iops : min= 208, max=11874, avg=5656.26, stdev=3832.80, samples=23 00:20:16.124 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.12% 00:20:16.124 lat (msec) : 2=1.76%, 4=7.66%, 10=29.01%, 20=8.95%, 50=47.89% 00:20:16.124 lat (msec) : 100=2.40%, 250=2.09%, 500=0.05% 00:20:16.124 cpu : usr=99.43%, sys=0.11%, ctx=35, majf=0, minf=5583 00:20:16.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:16.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.124 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:16.124 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:16.124 00:20:16.124 Run status group 0 (all jobs): 00:20:16.124 READ: bw=20.3MiB/s (21.3MB/s), 10.2MiB/s-10.3MiB/s (10.7MB/s-10.8MB/s), io=512MiB (536MB), run=24920-25143msec 00:20:16.124 WRITE: bw=20.4MiB/s (21.4MB/s), 10.2MiB/s-10.3MiB/s (10.7MB/s-10.8MB/s), io=512MiB (537MB), run=24816-25112msec 00:20:18.041 ----------------------------------------------------- 00:20:18.041 Suppressions used: 00:20:18.041 count bytes template 00:20:18.041 2 10 /usr/src/fio/parse.c 00:20:18.041 3 288 /usr/src/fio/iolog.c 00:20:18.041 1 8 libtcmalloc_minimal.so 00:20:18.041 1 904 libcrypto.so 00:20:18.041 ----------------------------------------------------- 00:20:18.041 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:18.041 
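The xtrace lines here show fio_bdev's sanitizer shim preparing the third job: it runs ldd against the spdk_bdev fio plugin, greps for libasan, takes the resolved library path (awk '{print $3}'), and pre-loads that runtime ahead of the plugin so the uninstrumented system fio can dlopen the ASan-instrumented ioengine without the usual "ASan runtime not preloaded" abort. Condensed, the traced logic amounts to the following sketch (paths copied from the trace; the job-file path is shortened here):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 on this host
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio randw-verify-depth128.fio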
12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:18.041 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:18.301 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:18.301 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:18.301 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:18.301 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:18.301 12:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:18.301 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:18.301 fio-3.35 00:20:18.301 Starting 1 thread 00:20:36.495 00:20:36.495 test: (groupid=0, jobs=1): err= 0: pid=76100: Thu Dec 5 12:24:06 2024 00:20:36.495 read: IOPS=6966, BW=27.2MiB/s (28.5MB/s)(255MiB/9359msec) 00:20:36.495 slat (nsec): min=3096, max=32136, avg=4911.78, stdev=1209.25 00:20:36.495 clat (usec): min=1423, max=50466, avg=18364.12, stdev=2758.36 00:20:36.495 lat (usec): min=1432, max=50471, avg=18369.03, stdev=2758.34 00:20:36.495 clat percentiles (usec): 00:20:36.495 | 1.00th=[14877], 5.00th=[15270], 10.00th=[15533], 20.00th=[15926], 00:20:36.495 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17957], 60.00th=[19006], 00:20:36.495 | 70.00th=[19792], 80.00th=[20579], 90.00th=[21627], 95.00th=[22938], 00:20:36.495 | 99.00th=[25560], 99.50th=[26870], 99.90th=[40109], 99.95th=[45351], 00:20:36.495 | 99.99th=[49546] 00:20:36.495 write: IOPS=9255, BW=36.2MiB/s (37.9MB/s)(256MiB/7081msec); 0 zone resets 00:20:36.495 slat (usec): min=4, max=395, avg= 7.48, stdev= 4.66 00:20:36.495 clat (usec): min=611, max=77308, avg=13762.72, stdev=15578.57 00:20:36.495 lat (usec): min=617, max=77314, avg=13770.21, stdev=15578.63 00:20:36.495 clat percentiles (usec): 00:20:36.495 | 1.00th=[ 1057], 5.00th=[ 1369], 10.00th=[ 1582], 20.00th=[ 1909], 00:20:36.495 | 30.00th=[ 2212], 40.00th=[ 2933], 50.00th=[ 9110], 60.00th=[11863], 00:20:36.495 | 70.00th=[15139], 80.00th=[17957], 90.00th=[45876], 95.00th=[49021], 00:20:36.495 | 99.00th=[52691], 99.50th=[54264], 99.90th=[56886], 99.95th=[64750], 00:20:36.495 | 99.99th=[72877] 00:20:36.495 bw ( KiB/s): min= 4008, max=50576, per=94.41%, avg=34952.53, stdev=10196.60, samples=15 00:20:36.495 iops : min= 1002, max=12644, avg=8738.13, stdev=2549.15, samples=15 00:20:36.495 lat (usec) : 750=0.03%, 1000=0.28% 00:20:36.495 lat (msec) : 2=11.31%, 4=9.12%, 10=6.06%, 20=51.25%, 50=20.33% 00:20:36.495 lat (msec) : 100=1.61% 00:20:36.495 cpu : usr=99.04%, sys=0.18%, ctx=33, majf=0, 
minf=5565 00:20:36.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:36.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.495 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:36.495 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:36.495 00:20:36.495 Run status group 0 (all jobs): 00:20:36.495 READ: bw=27.2MiB/s (28.5MB/s), 27.2MiB/s-27.2MiB/s (28.5MB/s-28.5MB/s), io=255MiB (267MB), run=9359-9359msec 00:20:36.495 WRITE: bw=36.2MiB/s (37.9MB/s), 36.2MiB/s-36.2MiB/s (37.9MB/s-37.9MB/s), io=256MiB (268MB), run=7081-7081msec 00:20:37.883 ----------------------------------------------------- 00:20:37.883 Suppressions used: 00:20:37.883 count bytes template 00:20:37.883 1 5 /usr/src/fio/parse.c 00:20:37.883 2 192 /usr/src/fio/iolog.c 00:20:37.883 1 8 libtcmalloc_minimal.so 00:20:37.883 1 904 libcrypto.so 00:20:37.883 ----------------------------------------------------- 00:20:37.883 00:20:37.883 12:24:08 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:20:37.883 12:24:08 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:37.883 12:24:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:38.145 12:24:08 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:38.145 Remove shared memory files 00:20:38.145 12:24:08 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:20:38.145 12:24:08 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:38.145 12:24:08 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:20:38.145 12:24:08 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:20:38.145 12:24:08 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57278 /dev/shm/spdk_tgt_trace.pid74394 00:20:38.145 12:24:08 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:38.145 12:24:08 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:20:38.145 ************************************ 00:20:38.145 END TEST ftl_fio_basic 00:20:38.145 ************************************ 00:20:38.145 00:20:38.145 real 1m10.507s 00:20:38.145 user 2m32.688s 00:20:38.145 sys 0m3.328s 00:20:38.145 12:24:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.145 12:24:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:38.145 12:24:08 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:38.145 12:24:08 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:38.145 12:24:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.145 12:24:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:38.145 ************************************ 00:20:38.145 START TEST ftl_bdevperf 00:20:38.145 ************************************ 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:38.145 * Looking for test storage... 
00:20:38.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:38.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.145 --rc genhtml_branch_coverage=1 00:20:38.145 --rc genhtml_function_coverage=1 00:20:38.145 --rc genhtml_legend=1 00:20:38.145 --rc geninfo_all_blocks=1 00:20:38.145 --rc geninfo_unexecuted_blocks=1 00:20:38.145 00:20:38.145 ' 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:38.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.145 --rc genhtml_branch_coverage=1 00:20:38.145 
--rc genhtml_function_coverage=1 00:20:38.145 --rc genhtml_legend=1 00:20:38.145 --rc geninfo_all_blocks=1 00:20:38.145 --rc geninfo_unexecuted_blocks=1 00:20:38.145 00:20:38.145 ' 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:38.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.145 --rc genhtml_branch_coverage=1 00:20:38.145 --rc genhtml_function_coverage=1 00:20:38.145 --rc genhtml_legend=1 00:20:38.145 --rc geninfo_all_blocks=1 00:20:38.145 --rc geninfo_unexecuted_blocks=1 00:20:38.145 00:20:38.145 ' 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:38.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.145 --rc genhtml_branch_coverage=1 00:20:38.145 --rc genhtml_function_coverage=1 00:20:38.145 --rc genhtml_legend=1 00:20:38.145 --rc geninfo_all_blocks=1 00:20:38.145 --rc geninfo_unexecuted_blocks=1 00:20:38.145 00:20:38.145 ' 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:20:38.145 12:24:08 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:38.145 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:38.146 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:38.146 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:38.146 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:38.146 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:38.146 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:38.146 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:38.146 12:24:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:20:38.146 12:24:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:20:38.146 12:24:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:20:38.146 12:24:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:38.408 12:24:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:20:38.408 12:24:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76371 00:20:38.408 12:24:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:20:38.408 12:24:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:20:38.408 12:24:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76371 00:20:38.408 12:24:09 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76371 ']' 00:20:38.408 12:24:09 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.408 12:24:09 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.408 12:24:09 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.408 12:24:09 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.408 12:24:09 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:38.408 [2024-12-05 12:24:09.080649] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
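bdevperf is launched in wait-for-RPC mode (-z) with ftl0 as the target (-T), so the test can build the bdev stack over the RPC socket before any I/O starts; runs are then driven with bdevperf.py perform_tests, as seen further down. A minimal sketch of the same pattern outside the harness, assuming an SPDK tree at $SPDK and the default /var/tmp/spdk.sock socket (the harness uses waitforlisten rather than a fixed sleep):

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/examples/bdevperf -z -T ftl0 &     # start idle; no I/O until told
    sleep 1                                        # crude stand-in for waitforlisten
    $SPDK/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    # ... create lvstore/lvol/split/ftl0 over RPC, as the log below does ...
    $SPDK/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632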
00:20:38.408 [2024-12-05 12:24:09.080935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76371 ] 00:20:38.408 [2024-12-05 12:24:09.241086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.670 [2024-12-05 12:24:09.376329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.242 12:24:09 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.242 12:24:09 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:20:39.242 12:24:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:39.242 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:20:39.242 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:39.242 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:20:39.242 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:20:39.242 12:24:09 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:39.504 12:24:10 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:39.504 12:24:10 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:20:39.504 12:24:10 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:39.504 12:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:39.504 12:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:39.504 12:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:39.504 12:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:39.504 12:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:39.767 12:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:39.767 { 00:20:39.767 "name": "nvme0n1", 00:20:39.767 "aliases": [ 00:20:39.767 "a82c045b-5922-4e8b-9fad-7d6b4910008a" 00:20:39.767 ], 00:20:39.767 "product_name": "NVMe disk", 00:20:39.767 "block_size": 4096, 00:20:39.767 "num_blocks": 1310720, 00:20:39.767 "uuid": "a82c045b-5922-4e8b-9fad-7d6b4910008a", 00:20:39.767 "numa_id": -1, 00:20:39.767 "assigned_rate_limits": { 00:20:39.767 "rw_ios_per_sec": 0, 00:20:39.767 "rw_mbytes_per_sec": 0, 00:20:39.767 "r_mbytes_per_sec": 0, 00:20:39.767 "w_mbytes_per_sec": 0 00:20:39.767 }, 00:20:39.767 "claimed": true, 00:20:39.767 "claim_type": "read_many_write_one", 00:20:39.767 "zoned": false, 00:20:39.767 "supported_io_types": { 00:20:39.767 "read": true, 00:20:39.767 "write": true, 00:20:39.767 "unmap": true, 00:20:39.767 "flush": true, 00:20:39.767 "reset": true, 00:20:39.767 "nvme_admin": true, 00:20:39.767 "nvme_io": true, 00:20:39.767 "nvme_io_md": false, 00:20:39.767 "write_zeroes": true, 00:20:39.767 "zcopy": false, 00:20:39.767 "get_zone_info": false, 00:20:39.767 "zone_management": false, 00:20:39.767 "zone_append": false, 00:20:39.767 "compare": true, 00:20:39.767 "compare_and_write": false, 00:20:39.767 "abort": true, 00:20:39.767 "seek_hole": false, 00:20:39.767 "seek_data": false, 00:20:39.767 "copy": true, 00:20:39.767 "nvme_iov_md": false 00:20:39.767 }, 00:20:39.767 "driver_specific": { 00:20:39.767 
"nvme": [ 00:20:39.767 { 00:20:39.767 "pci_address": "0000:00:11.0", 00:20:39.767 "trid": { 00:20:39.767 "trtype": "PCIe", 00:20:39.767 "traddr": "0000:00:11.0" 00:20:39.767 }, 00:20:39.767 "ctrlr_data": { 00:20:39.767 "cntlid": 0, 00:20:39.767 "vendor_id": "0x1b36", 00:20:39.767 "model_number": "QEMU NVMe Ctrl", 00:20:39.767 "serial_number": "12341", 00:20:39.767 "firmware_revision": "8.0.0", 00:20:39.767 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:39.767 "oacs": { 00:20:39.767 "security": 0, 00:20:39.767 "format": 1, 00:20:39.767 "firmware": 0, 00:20:39.767 "ns_manage": 1 00:20:39.767 }, 00:20:39.767 "multi_ctrlr": false, 00:20:39.767 "ana_reporting": false 00:20:39.767 }, 00:20:39.767 "vs": { 00:20:39.767 "nvme_version": "1.4" 00:20:39.767 }, 00:20:39.767 "ns_data": { 00:20:39.767 "id": 1, 00:20:39.767 "can_share": false 00:20:39.767 } 00:20:39.767 } 00:20:39.767 ], 00:20:39.767 "mp_policy": "active_passive" 00:20:39.767 } 00:20:39.767 } 00:20:39.767 ]' 00:20:39.767 12:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:39.767 12:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:39.767 12:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:39.767 12:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:39.767 12:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:39.767 12:24:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:20:39.767 12:24:10 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:20:39.767 12:24:10 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:39.767 12:24:10 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:20:39.767 12:24:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:39.767 12:24:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:40.028 12:24:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=41426b95-e97f-41e1-91a9-b3d0b18db2f0 00:20:40.028 12:24:10 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:20:40.028 12:24:10 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 41426b95-e97f-41e1-91a9-b3d0b18db2f0 00:20:40.289 12:24:11 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:40.551 12:24:11 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=d8fc6c8c-b86c-43bf-9465-75e8466eb1b2 00:20:40.551 12:24:11 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d8fc6c8c-b86c-43bf-9465-75e8466eb1b2 00:20:40.812 12:24:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=d579df28-ef85-4e9f-a2ef-8d3d3a18b38e 00:20:40.812 12:24:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d579df28-ef85-4e9f-a2ef-8d3d3a18b38e 00:20:40.812 12:24:11 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:20:40.812 12:24:11 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:40.812 12:24:11 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=d579df28-ef85-4e9f-a2ef-8d3d3a18b38e 00:20:40.812 12:24:11 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:20:40.812 12:24:11 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size d579df28-ef85-4e9f-a2ef-8d3d3a18b38e 00:20:40.812 12:24:11 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=d579df28-ef85-4e9f-a2ef-8d3d3a18b38e 00:20:40.812 12:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:40.812 12:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:40.812 12:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:40.812 12:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d579df28-ef85-4e9f-a2ef-8d3d3a18b38e 00:20:41.074 12:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:41.074 { 00:20:41.074 "name": "d579df28-ef85-4e9f-a2ef-8d3d3a18b38e", 00:20:41.074 "aliases": [ 00:20:41.074 "lvs/nvme0n1p0" 00:20:41.074 ], 00:20:41.074 "product_name": "Logical Volume", 00:20:41.074 "block_size": 4096, 00:20:41.074 "num_blocks": 26476544, 00:20:41.074 "uuid": "d579df28-ef85-4e9f-a2ef-8d3d3a18b38e", 00:20:41.074 "assigned_rate_limits": { 00:20:41.074 "rw_ios_per_sec": 0, 00:20:41.074 "rw_mbytes_per_sec": 0, 00:20:41.074 "r_mbytes_per_sec": 0, 00:20:41.074 "w_mbytes_per_sec": 0 00:20:41.074 }, 00:20:41.074 "claimed": false, 00:20:41.074 "zoned": false, 00:20:41.074 "supported_io_types": { 00:20:41.074 "read": true, 00:20:41.074 "write": true, 00:20:41.074 "unmap": true, 00:20:41.074 "flush": false, 00:20:41.074 "reset": true, 00:20:41.074 "nvme_admin": false, 00:20:41.074 "nvme_io": false, 00:20:41.074 "nvme_io_md": false, 00:20:41.074 "write_zeroes": true, 00:20:41.074 "zcopy": false, 00:20:41.074 "get_zone_info": false, 00:20:41.074 "zone_management": false, 00:20:41.074 "zone_append": false, 00:20:41.074 "compare": false, 00:20:41.074 "compare_and_write": false, 00:20:41.074 "abort": false, 00:20:41.074 "seek_hole": true, 00:20:41.074 "seek_data": true, 00:20:41.074 "copy": false, 00:20:41.074 "nvme_iov_md": false 00:20:41.074 }, 00:20:41.074 "driver_specific": { 00:20:41.074 "lvol": { 00:20:41.074 "lvol_store_uuid": "d8fc6c8c-b86c-43bf-9465-75e8466eb1b2", 00:20:41.074 "base_bdev": "nvme0n1", 00:20:41.074 "thin_provision": true, 00:20:41.074 "num_allocated_clusters": 0, 00:20:41.074 "snapshot": false, 00:20:41.074 "clone": false, 00:20:41.074 "esnap_clone": false 00:20:41.074 } 00:20:41.074 } 00:20:41.074 } 00:20:41.074 ]' 00:20:41.074 12:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:41.074 12:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:41.074 12:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:41.074 12:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:41.074 12:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:41.074 12:24:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:41.074 12:24:11 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:20:41.074 12:24:11 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:20:41.074 12:24:11 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:41.336 12:24:12 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:41.336 12:24:12 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:41.336 12:24:12 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size d579df28-ef85-4e9f-a2ef-8d3d3a18b38e 00:20:41.336 12:24:12 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=d579df28-ef85-4e9f-a2ef-8d3d3a18b38e 00:20:41.336 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:41.336 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:41.336 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:41.336 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d579df28-ef85-4e9f-a2ef-8d3d3a18b38e 00:20:41.598 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:41.598 { 00:20:41.598 "name": "d579df28-ef85-4e9f-a2ef-8d3d3a18b38e", 00:20:41.598 "aliases": [ 00:20:41.598 "lvs/nvme0n1p0" 00:20:41.598 ], 00:20:41.598 "product_name": "Logical Volume", 00:20:41.598 "block_size": 4096, 00:20:41.598 "num_blocks": 26476544, 00:20:41.598 "uuid": "d579df28-ef85-4e9f-a2ef-8d3d3a18b38e", 00:20:41.598 "assigned_rate_limits": { 00:20:41.598 "rw_ios_per_sec": 0, 00:20:41.598 "rw_mbytes_per_sec": 0, 00:20:41.598 "r_mbytes_per_sec": 0, 00:20:41.598 "w_mbytes_per_sec": 0 00:20:41.598 }, 00:20:41.598 "claimed": false, 00:20:41.598 "zoned": false, 00:20:41.598 "supported_io_types": { 00:20:41.598 "read": true, 00:20:41.598 "write": true, 00:20:41.598 "unmap": true, 00:20:41.598 "flush": false, 00:20:41.598 "reset": true, 00:20:41.598 "nvme_admin": false, 00:20:41.598 "nvme_io": false, 00:20:41.598 "nvme_io_md": false, 00:20:41.598 "write_zeroes": true, 00:20:41.598 "zcopy": false, 00:20:41.598 "get_zone_info": false, 00:20:41.598 "zone_management": false, 00:20:41.598 "zone_append": false, 00:20:41.598 "compare": false, 00:20:41.598 "compare_and_write": false, 00:20:41.598 "abort": false, 00:20:41.598 "seek_hole": true, 00:20:41.598 "seek_data": true, 00:20:41.598 "copy": false, 00:20:41.598 "nvme_iov_md": false 00:20:41.598 }, 00:20:41.598 "driver_specific": { 00:20:41.598 "lvol": { 00:20:41.598 "lvol_store_uuid": "d8fc6c8c-b86c-43bf-9465-75e8466eb1b2", 00:20:41.598 "base_bdev": "nvme0n1", 00:20:41.598 "thin_provision": true, 00:20:41.598 "num_allocated_clusters": 0, 00:20:41.598 "snapshot": false, 00:20:41.598 "clone": false, 00:20:41.598 "esnap_clone": false 00:20:41.598 } 00:20:41.598 } 00:20:41.598 } 00:20:41.598 ]' 00:20:41.598 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:41.598 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:41.598 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:41.598 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:41.598 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:41.598 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:41.598 12:24:12 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:20:41.598 12:24:12 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:41.860 12:24:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:20:41.860 12:24:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size d579df28-ef85-4e9f-a2ef-8d3d3a18b38e 00:20:41.860 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=d579df28-ef85-4e9f-a2ef-8d3d3a18b38e 00:20:41.860 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:41.860 12:24:12 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:20:41.860 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:41.860 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d579df28-ef85-4e9f-a2ef-8d3d3a18b38e 00:20:42.122 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:42.122 { 00:20:42.122 "name": "d579df28-ef85-4e9f-a2ef-8d3d3a18b38e", 00:20:42.122 "aliases": [ 00:20:42.122 "lvs/nvme0n1p0" 00:20:42.122 ], 00:20:42.122 "product_name": "Logical Volume", 00:20:42.122 "block_size": 4096, 00:20:42.122 "num_blocks": 26476544, 00:20:42.122 "uuid": "d579df28-ef85-4e9f-a2ef-8d3d3a18b38e", 00:20:42.122 "assigned_rate_limits": { 00:20:42.122 "rw_ios_per_sec": 0, 00:20:42.122 "rw_mbytes_per_sec": 0, 00:20:42.122 "r_mbytes_per_sec": 0, 00:20:42.122 "w_mbytes_per_sec": 0 00:20:42.122 }, 00:20:42.122 "claimed": false, 00:20:42.122 "zoned": false, 00:20:42.122 "supported_io_types": { 00:20:42.122 "read": true, 00:20:42.122 "write": true, 00:20:42.122 "unmap": true, 00:20:42.122 "flush": false, 00:20:42.122 "reset": true, 00:20:42.122 "nvme_admin": false, 00:20:42.122 "nvme_io": false, 00:20:42.122 "nvme_io_md": false, 00:20:42.122 "write_zeroes": true, 00:20:42.122 "zcopy": false, 00:20:42.122 "get_zone_info": false, 00:20:42.122 "zone_management": false, 00:20:42.122 "zone_append": false, 00:20:42.122 "compare": false, 00:20:42.122 "compare_and_write": false, 00:20:42.122 "abort": false, 00:20:42.122 "seek_hole": true, 00:20:42.122 "seek_data": true, 00:20:42.122 "copy": false, 00:20:42.122 "nvme_iov_md": false 00:20:42.122 }, 00:20:42.122 "driver_specific": { 00:20:42.122 "lvol": { 00:20:42.122 "lvol_store_uuid": "d8fc6c8c-b86c-43bf-9465-75e8466eb1b2", 00:20:42.122 "base_bdev": "nvme0n1", 00:20:42.122 "thin_provision": true, 00:20:42.122 "num_allocated_clusters": 0, 00:20:42.122 "snapshot": false, 00:20:42.122 "clone": false, 00:20:42.122 "esnap_clone": false 00:20:42.122 } 00:20:42.122 } 00:20:42.122 } 00:20:42.122 ]' 00:20:42.122 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:42.122 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:42.122 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:42.122 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:42.122 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:42.122 12:24:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:42.122 12:24:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:20:42.123 12:24:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d579df28-ef85-4e9f-a2ef-8d3d3a18b38e -c nvc0n1p0 --l2p_dram_limit 20 00:20:42.384 [2024-12-05 12:24:13.010793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.384 [2024-12-05 12:24:13.010865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:42.384 [2024-12-05 12:24:13.010884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:42.384 [2024-12-05 12:24:13.010896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.384 [2024-12-05 12:24:13.010972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.384 [2024-12-05 12:24:13.010988] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.384 [2024-12-05 12:24:13.010998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:42.384 [2024-12-05 12:24:13.011011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.384 [2024-12-05 12:24:13.011030] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:42.384 [2024-12-05 12:24:13.011848] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:42.384 [2024-12-05 12:24:13.011869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.384 [2024-12-05 12:24:13.011882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.384 [2024-12-05 12:24:13.011893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:20:42.384 [2024-12-05 12:24:13.011905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.384 [2024-12-05 12:24:13.011940] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID efbc438d-1f7a-404a-be46-236e851945ae 00:20:42.384 [2024-12-05 12:24:13.014202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.384 [2024-12-05 12:24:13.014480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:42.384 [2024-12-05 12:24:13.014517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:42.384 [2024-12-05 12:24:13.014527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.384 [2024-12-05 12:24:13.027040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.384 [2024-12-05 12:24:13.027206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.384 [2024-12-05 12:24:13.027278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.294 ms 00:20:42.384 [2024-12-05 12:24:13.027307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.384 [2024-12-05 12:24:13.027431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.384 [2024-12-05 12:24:13.027457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.384 [2024-12-05 12:24:13.027510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:42.384 [2024-12-05 12:24:13.027530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.384 [2024-12-05 12:24:13.027610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.384 [2024-12-05 12:24:13.027807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:42.384 [2024-12-05 12:24:13.027836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:42.384 [2024-12-05 12:24:13.027857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.384 [2024-12-05 12:24:13.027904] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.384 [2024-12-05 12:24:13.032895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.384 [2024-12-05 12:24:13.033056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.384 [2024-12-05 12:24:13.033118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.006 ms 00:20:42.384 [2024-12-05 12:24:13.033150] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.384 [2024-12-05 12:24:13.033213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.384 [2024-12-05 12:24:13.033253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:42.384 [2024-12-05 12:24:13.033275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:42.384 [2024-12-05 12:24:13.033297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.384 [2024-12-05 12:24:13.033342] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:42.384 [2024-12-05 12:24:13.033537] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:42.384 [2024-12-05 12:24:13.033578] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:42.384 [2024-12-05 12:24:13.033616] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:42.384 [2024-12-05 12:24:13.033650] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:42.384 [2024-12-05 12:24:13.033683] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:42.384 [2024-12-05 12:24:13.033784] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:42.384 [2024-12-05 12:24:13.033810] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:42.384 [2024-12-05 12:24:13.033830] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:42.384 [2024-12-05 12:24:13.033899] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:42.384 [2024-12-05 12:24:13.033927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.384 [2024-12-05 12:24:13.033951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:42.384 [2024-12-05 12:24:13.033972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:20:42.384 [2024-12-05 12:24:13.034027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.384 [2024-12-05 12:24:13.034735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.384 [2024-12-05 12:24:13.034804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:42.384 [2024-12-05 12:24:13.035184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:42.384 [2024-12-05 12:24:13.035247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.384 [2024-12-05 12:24:13.035427] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:42.384 [2024-12-05 12:24:13.035519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:42.384 [2024-12-05 12:24:13.035549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.384 [2024-12-05 12:24:13.035603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.384 [2024-12-05 12:24:13.035628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:42.384 [2024-12-05 12:24:13.035649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:42.384 [2024-12-05 12:24:13.035669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:42.384 
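The layout numbers are self-consistent: 20971520 L2P entries at 4 B each (see "L2P entries" and "L2P address size" above) is exactly the 80.00 MiB reported for the l2p region, and the --l2p_dram_limit 20 passed to bdev_ftl_create resurfaces later in the log as "l2p maximum resident size is: 19 (of 20) MiB". A quick check:

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80 (MiB), the l2p region size
    # 20971520 entries x 4 KiB logical blocks = 80 GiB of mappable logical space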
[2024-12-05 12:24:13.035726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:42.384 [2024-12-05 12:24:13.035749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:42.384 [2024-12-05 12:24:13.035771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.384 [2024-12-05 12:24:13.035790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:42.384 [2024-12-05 12:24:13.035822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:42.384 [2024-12-05 12:24:13.036145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.384 [2024-12-05 12:24:13.036209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:42.384 [2024-12-05 12:24:13.036424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:42.384 [2024-12-05 12:24:13.036444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.384 [2024-12-05 12:24:13.036453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:42.384 [2024-12-05 12:24:13.036478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:42.384 [2024-12-05 12:24:13.036486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.384 [2024-12-05 12:24:13.036498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:42.384 [2024-12-05 12:24:13.036506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:42.384 [2024-12-05 12:24:13.036516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.384 [2024-12-05 12:24:13.036523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:42.384 [2024-12-05 12:24:13.036533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:42.384 [2024-12-05 12:24:13.036541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.384 [2024-12-05 12:24:13.036550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:42.384 [2024-12-05 12:24:13.036557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:42.384 [2024-12-05 12:24:13.036566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.384 [2024-12-05 12:24:13.036572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:42.384 [2024-12-05 12:24:13.036582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:42.384 [2024-12-05 12:24:13.036589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.384 [2024-12-05 12:24:13.036601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:42.384 [2024-12-05 12:24:13.036608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:42.384 [2024-12-05 12:24:13.036617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.384 [2024-12-05 12:24:13.036624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:42.384 [2024-12-05 12:24:13.036633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:42.384 [2024-12-05 12:24:13.036640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.384 [2024-12-05 12:24:13.036648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:42.384 [2024-12-05 12:24:13.036654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:20:42.384 [2024-12-05 12:24:13.036663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.384 [2024-12-05 12:24:13.036669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:42.384 [2024-12-05 12:24:13.036678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:42.384 [2024-12-05 12:24:13.036685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.384 [2024-12-05 12:24:13.036694] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:42.384 [2024-12-05 12:24:13.036703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:42.385 [2024-12-05 12:24:13.036712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.385 [2024-12-05 12:24:13.036720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.385 [2024-12-05 12:24:13.036733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:42.385 [2024-12-05 12:24:13.036740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:42.385 [2024-12-05 12:24:13.036749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:42.385 [2024-12-05 12:24:13.036757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:42.385 [2024-12-05 12:24:13.036766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:42.385 [2024-12-05 12:24:13.036777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:42.385 [2024-12-05 12:24:13.036794] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:42.385 [2024-12-05 12:24:13.036807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.385 [2024-12-05 12:24:13.036819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:42.385 [2024-12-05 12:24:13.036828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:42.385 [2024-12-05 12:24:13.036839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:42.385 [2024-12-05 12:24:13.036846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:42.385 [2024-12-05 12:24:13.036855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:42.385 [2024-12-05 12:24:13.036862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:42.385 [2024-12-05 12:24:13.036872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:42.385 [2024-12-05 12:24:13.036880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:42.385 [2024-12-05 12:24:13.036892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:42.385 [2024-12-05 12:24:13.036899] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:42.385 [2024-12-05 12:24:13.036909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:42.385 [2024-12-05 12:24:13.036917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:42.385 [2024-12-05 12:24:13.036927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:42.385 [2024-12-05 12:24:13.036936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:42.385 [2024-12-05 12:24:13.036945] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:42.385 [2024-12-05 12:24:13.036954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.385 [2024-12-05 12:24:13.036969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:42.385 [2024-12-05 12:24:13.036977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:42.385 [2024-12-05 12:24:13.036986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:42.385 [2024-12-05 12:24:13.036994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:42.385 [2024-12-05 12:24:13.037006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.385 [2024-12-05 12:24:13.037015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:42.385 [2024-12-05 12:24:13.037027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms 00:20:42.385 [2024-12-05 12:24:13.037034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.385 [2024-12-05 12:24:13.037095] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
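The scrub step logged next wipes the NV cache data region before first use; on this run it takes about 3956 ms for 5 chunks. If the full ~5171 MiB nvc0n1p0 partition is rewritten (an assumption — the data region proper is slightly smaller than the partition), that is roughly 1.3 GiB/s of writes as a back-of-envelope figure:

    awk 'BEGIN { printf "%.2f GiB/s\n", (5171 / 1024) / 3.956 }'   # ~1.28 GiB/s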
00:20:42.385 [2024-12-05 12:24:13.037107] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:46.586 [2024-12-05 12:24:16.993521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:16.993899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:46.586 [2024-12-05 12:24:16.994147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3956.401 ms 00:20:46.586 [2024-12-05 12:24:16.994195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 12:24:17.032794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.033017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:46.586 [2024-12-05 12:24:17.033456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.277 ms 00:20:46.586 [2024-12-05 12:24:17.033524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 12:24:17.033715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.033959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:46.586 [2024-12-05 12:24:17.033994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:46.586 [2024-12-05 12:24:17.034016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 12:24:17.083920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.084133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:46.586 [2024-12-05 12:24:17.084443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.836 ms 00:20:46.586 [2024-12-05 12:24:17.084513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 12:24:17.084787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.084815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:46.586 [2024-12-05 12:24:17.084831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:46.586 [2024-12-05 12:24:17.084844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 12:24:17.085875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.085938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:46.586 [2024-12-05 12:24:17.086043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:20:46.586 [2024-12-05 12:24:17.086069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 12:24:17.086221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.086294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:46.586 [2024-12-05 12:24:17.086328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:20:46.586 [2024-12-05 12:24:17.086350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 12:24:17.104912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.105092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:46.586 [2024-12-05 
12:24:17.105171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.523 ms 00:20:46.586 [2024-12-05 12:24:17.105196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 12:24:17.120168] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:20:46.586 [2024-12-05 12:24:17.129749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.129803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:46.586 [2024-12-05 12:24:17.129815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.430 ms 00:20:46.586 [2024-12-05 12:24:17.129826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 12:24:17.231600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.231691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:46.586 [2024-12-05 12:24:17.231708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.742 ms 00:20:46.586 [2024-12-05 12:24:17.231721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 12:24:17.231948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.231966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:46.586 [2024-12-05 12:24:17.231976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:20:46.586 [2024-12-05 12:24:17.231990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 12:24:17.258773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.258976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:46.586 [2024-12-05 12:24:17.259001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.728 ms 00:20:46.586 [2024-12-05 12:24:17.259013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 12:24:17.284992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.285051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:46.586 [2024-12-05 12:24:17.285065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.934 ms 00:20:46.586 [2024-12-05 12:24:17.285076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 12:24:17.285808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.285836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:46.586 [2024-12-05 12:24:17.285848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:20:46.586 [2024-12-05 12:24:17.285859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 12:24:17.378415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.378497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:46.586 [2024-12-05 12:24:17.378513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.481 ms 00:20:46.586 [2024-12-05 12:24:17.378525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 
12:24:17.407899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.407958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:46.586 [2024-12-05 12:24:17.407976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.276 ms 00:20:46.586 [2024-12-05 12:24:17.407988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.586 [2024-12-05 12:24:17.435172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.586 [2024-12-05 12:24:17.435231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:46.586 [2024-12-05 12:24:17.435244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.132 ms 00:20:46.586 [2024-12-05 12:24:17.435254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.848 [2024-12-05 12:24:17.462771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.848 [2024-12-05 12:24:17.462829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:46.848 [2024-12-05 12:24:17.462843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.464 ms 00:20:46.848 [2024-12-05 12:24:17.462853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.848 [2024-12-05 12:24:17.462911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.848 [2024-12-05 12:24:17.462928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:46.848 [2024-12-05 12:24:17.462937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:46.848 [2024-12-05 12:24:17.462948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.848 [2024-12-05 12:24:17.463055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.848 [2024-12-05 12:24:17.463068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:46.848 [2024-12-05 12:24:17.463077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:46.848 [2024-12-05 12:24:17.463090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.848 [2024-12-05 12:24:17.464557] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4453.158 ms, result 0 00:20:46.848 { 00:20:46.848 "name": "ftl0", 00:20:46.848 "uuid": "efbc438d-1f7a-404a-be46-236e851945ae" 00:20:46.848 } 00:20:46.848 12:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:20:46.848 12:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:20:46.848 12:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:20:46.848 12:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:20:47.110 [2024-12-05 12:24:17.812556] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:47.110 I/O size of 69632 is greater than zero copy threshold (65536). 00:20:47.110 Zero copy mechanism will not be used. 00:20:47.110 Running I/O for 4 seconds... 
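The bdevperf pass launched above is driven entirely over RPC. A minimal shell sketch of this step, built only from commands echoed in the trace (paths and flags are verbatim from this log; running it standalone assumes the bdevperf app is already up with ftl0 created):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

    # Confirm the FTL bdev registered before issuing I/O (mirrors bdevperf.sh@28).
    $rpc_py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0

    # 4-second random-write pass, queue depth 1, 69632-byte (68 KiB) I/Os.
    # 69632 exceeds the 65536-byte zero-copy threshold, hence the notice above.
    $bdevperf_py perform_tests -q 1 -w randwrite -t 4 -o 69632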
00:20:49.000 647.00 IOPS, 42.96 MiB/s [2024-12-05T12:24:21.252Z] 652.00 IOPS, 43.30 MiB/s [2024-12-05T12:24:22.189Z] 665.33 IOPS, 44.18 MiB/s [2024-12-05T12:24:22.189Z] 727.00 IOPS, 48.28 MiB/s 00:20:51.320 Latency(us) 00:20:51.320 [2024-12-05T12:24:22.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.320 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:20:51.320 ftl0 : 4.00 726.99 48.28 0.00 0.00 1450.47 236.31 3049.94 00:20:51.320 [2024-12-05T12:24:22.189Z] =================================================================================================================== 00:20:51.320 [2024-12-05T12:24:22.189Z] Total : 726.99 48.28 0.00 0.00 1450.47 236.31 3049.94 00:20:51.320 [2024-12-05 12:24:21.823136] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:51.320 { 00:20:51.320 "results": [ 00:20:51.320 { 00:20:51.320 "job": "ftl0", 00:20:51.320 "core_mask": "0x1", 00:20:51.320 "workload": "randwrite", 00:20:51.320 "status": "finished", 00:20:51.320 "queue_depth": 1, 00:20:51.320 "io_size": 69632, 00:20:51.320 "runtime": 4.001431, 00:20:51.320 "iops": 726.9899193563502, 00:20:51.320 "mibps": 48.276674332257635, 00:20:51.320 "io_failed": 0, 00:20:51.320 "io_timeout": 0, 00:20:51.320 "avg_latency_us": 1450.4748658010947, 00:20:51.320 "min_latency_us": 236.30769230769232, 00:20:51.320 "max_latency_us": 3049.944615384615 00:20:51.320 } 00:20:51.320 ], 00:20:51.320 "core_count": 1 00:20:51.320 } 00:20:51.320 12:24:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:20:51.320 [2024-12-05 12:24:21.928195] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:51.320 Running I/O for 4 seconds... 
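Each perform_tests call reports its results twice: as the summary table and as the JSON block above, with matching fields (iops, mibps, and avg/min/max latency in microseconds). A hypothetical one-liner for pulling the headline numbers back out with jq, assuming the JSON block were saved to results.json (the filename is illustrative, not something the test produces):

    # Illustrative post-processing only; not part of the suite.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json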
00:20:53.195 7662.00 IOPS, 29.93 MiB/s [2024-12-05T12:24:25.006Z] 6622.00 IOPS, 25.87 MiB/s [2024-12-05T12:24:25.946Z] 6058.33 IOPS, 23.67 MiB/s [2024-12-05T12:24:26.207Z] 5852.75 IOPS, 22.86 MiB/s 00:20:55.338 Latency(us) 00:20:55.338 [2024-12-05T12:24:26.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.338 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:20:55.338 ftl0 : 4.03 5843.85 22.83 0.00 0.00 21836.80 422.20 44766.13 00:20:55.338 [2024-12-05T12:24:26.207Z] =================================================================================================================== 00:20:55.338 [2024-12-05T12:24:26.207Z] Total : 5843.85 22.83 0.00 0.00 21836.80 0.00 44766.13 00:20:55.338 [2024-12-05 12:24:25.964229] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:55.338 { 00:20:55.338 "results": [ 00:20:55.338 { 00:20:55.338 "job": "ftl0", 00:20:55.338 "core_mask": "0x1", 00:20:55.338 "workload": "randwrite", 00:20:55.338 "status": "finished", 00:20:55.338 "queue_depth": 128, 00:20:55.338 "io_size": 4096, 00:20:55.338 "runtime": 4.027313, 00:20:55.338 "iops": 5843.8467534060555, 00:20:55.338 "mibps": 22.827526380492404, 00:20:55.338 "io_failed": 0, 00:20:55.338 "io_timeout": 0, 00:20:55.338 "avg_latency_us": 21836.803095880114, 00:20:55.338 "min_latency_us": 422.20307692307694, 00:20:55.338 "max_latency_us": 44766.12923076923 00:20:55.338 } 00:20:55.338 ], 00:20:55.338 "core_count": 1 00:20:55.338 } 00:20:55.338 12:24:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:20:55.338 [2024-12-05 12:24:26.083002] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:55.338 Running I/O for 4 seconds... 
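The MiB/s column in these tables is just IOPS times I/O size divided by 2^20. A quick check against the 4 KiB run that just finished (awk is used here only for the float math; this aside is not part of the suite):

    # 5843.85 IOPS at 4096-byte I/Os -> ~22.83 MiB/s, matching the table above.
    awk 'BEGIN { printf "%.2f MiB/s\n", 5843.85 * 4096 / 1048576 }'

The same identity holds for the earlier 69632-byte run: 726.99 * 69632 / 1048576 is approximately 48.28 MiB/s.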
00:20:57.234 4644.00 IOPS, 18.14 MiB/s [2024-12-05T12:24:29.487Z] 4690.50 IOPS, 18.32 MiB/s [2024-12-05T12:24:30.431Z] 4638.33 IOPS, 18.12 MiB/s [2024-12-05T12:24:30.431Z] 4650.00 IOPS, 18.16 MiB/s 00:20:59.562 Latency(us) 00:20:59.562 [2024-12-05T12:24:30.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.562 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:59.562 Verification LBA range: start 0x0 length 0x1400000 00:20:59.562 ftl0 : 4.02 4661.44 18.21 0.00 0.00 27371.66 368.64 41338.09 00:20:59.562 [2024-12-05T12:24:30.431Z] =================================================================================================================== 00:20:59.562 [2024-12-05T12:24:30.431Z] Total : 4661.44 18.21 0.00 0.00 27371.66 0.00 41338.09 00:20:59.562 { 00:20:59.562 "results": [ 00:20:59.562 { 00:20:59.562 "job": "ftl0", 00:20:59.562 "core_mask": "0x1", 00:20:59.562 "workload": "verify", 00:20:59.562 "status": "finished", 00:20:59.562 "verify_range": { 00:20:59.562 "start": 0, 00:20:59.562 "length": 20971520 00:20:59.562 }, 00:20:59.562 "queue_depth": 128, 00:20:59.562 "io_size": 4096, 00:20:59.562 "runtime": 4.017644, 00:20:59.562 "iops": 4661.438395238602, 00:20:59.562 "mibps": 18.20874373140079, 00:20:59.562 "io_failed": 0, 00:20:59.562 "io_timeout": 0, 00:20:59.562 "avg_latency_us": 27371.661067919693, 00:20:59.562 "min_latency_us": 368.64, 00:20:59.562 "max_latency_us": 41338.092307692306 00:20:59.562 } 00:20:59.562 ], 00:20:59.562 "core_count": 1 00:20:59.562 } 00:20:59.562 [2024-12-05 12:24:30.118961] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:59.562 12:24:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:20:59.562 [2024-12-05 12:24:30.334770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.562 [2024-12-05 12:24:30.334833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:59.562 [2024-12-05 12:24:30.334847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:59.562 [2024-12-05 12:24:30.334861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.562 [2024-12-05 12:24:30.334888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:59.562 [2024-12-05 12:24:30.338206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.562 [2024-12-05 12:24:30.338247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:59.562 [2024-12-05 12:24:30.338264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.296 ms 00:20:59.562 [2024-12-05 12:24:30.338273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.562 [2024-12-05 12:24:30.341409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.562 [2024-12-05 12:24:30.341647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:59.562 [2024-12-05 12:24:30.341679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.102 ms 00:20:59.562 [2024-12-05 12:24:30.341689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.825 [2024-12-05 12:24:30.569213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.825 [2024-12-05 12:24:30.569417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:20:59.825 [2024-12-05 12:24:30.569449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 227.495 ms 00:20:59.825 [2024-12-05 12:24:30.569458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.825 [2024-12-05 12:24:30.575692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.825 [2024-12-05 12:24:30.575851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:59.825 [2024-12-05 12:24:30.575877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.165 ms 00:20:59.825 [2024-12-05 12:24:30.575891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.825 [2024-12-05 12:24:30.602834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.825 [2024-12-05 12:24:30.603019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:59.825 [2024-12-05 12:24:30.603048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.866 ms 00:20:59.825 [2024-12-05 12:24:30.603058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.825 [2024-12-05 12:24:30.622270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.825 [2024-12-05 12:24:30.622455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:59.825 [2024-12-05 12:24:30.622495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.104 ms 00:20:59.825 [2024-12-05 12:24:30.622505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.825 [2024-12-05 12:24:30.622686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.825 [2024-12-05 12:24:30.622699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:59.825 [2024-12-05 12:24:30.622715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:20:59.825 [2024-12-05 12:24:30.622724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.825 [2024-12-05 12:24:30.648407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.825 [2024-12-05 12:24:30.648456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:59.825 [2024-12-05 12:24:30.648493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.663 ms 00:20:59.825 [2024-12-05 12:24:30.648501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.825 [2024-12-05 12:24:30.674129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.825 [2024-12-05 12:24:30.674304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:59.825 [2024-12-05 12:24:30.674330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.577 ms 00:20:59.825 [2024-12-05 12:24:30.674337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.088 [2024-12-05 12:24:30.699126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.088 [2024-12-05 12:24:30.699170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:00.088 [2024-12-05 12:24:30.699186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.747 ms 00:21:00.088 [2024-12-05 12:24:30.699194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.088 [2024-12-05 12:24:30.723709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.088 [2024-12-05 12:24:30.723754] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:00.088 [2024-12-05 12:24:30.723773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.408 ms 00:21:00.088 [2024-12-05 12:24:30.723780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.088 [2024-12-05 12:24:30.723828] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:00.088 [2024-12-05 12:24:30.723844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:00.088 [2024-12-05 12:24:30.723858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:00.088 [2024-12-05 12:24:30.723867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:00.088 [2024-12-05 12:24:30.723877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:00.088 [2024-12-05 12:24:30.723886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:00.088 [2024-12-05 12:24:30.723896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:00.088 [2024-12-05 12:24:30.723905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:00.088 [2024-12-05 12:24:30.723914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:00.088 [2024-12-05 12:24:30.723922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:00.088 [2024-12-05 12:24:30.723933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:00.088 [2024-12-05 12:24:30.723940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:00.088 [2024-12-05 12:24:30.723951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:00.088 [2024-12-05 12:24:30.723958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:00.088 [2024-12-05 12:24:30.723971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:00.088 [2024-12-05 12:24:30.723979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.723989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.723997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:21:00.089 [2024-12-05 12:24:30.724053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724862] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:00.089 [2024-12-05 12:24:30.724906] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:00.089 [2024-12-05 12:24:30.724918] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: efbc438d-1f7a-404a-be46-236e851945ae 00:21:00.089 [2024-12-05 12:24:30.724930] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:00.089 [2024-12-05 12:24:30.724940] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:00.089 [2024-12-05 12:24:30.724948] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:00.089 [2024-12-05 12:24:30.724959] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:00.090 [2024-12-05 12:24:30.724966] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:00.090 [2024-12-05 12:24:30.724977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:00.090 [2024-12-05 12:24:30.724986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:00.090 [2024-12-05 12:24:30.724997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:00.090 [2024-12-05 12:24:30.725003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:00.090 [2024-12-05 12:24:30.725014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.090 [2024-12-05 12:24:30.725022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:00.090 [2024-12-05 12:24:30.725034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:21:00.090 [2024-12-05 12:24:30.725042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.090 [2024-12-05 12:24:30.739768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.090 [2024-12-05 12:24:30.739941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:00.090 [2024-12-05 12:24:30.739966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.660 ms 00:21:00.090 [2024-12-05 12:24:30.739975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.090 [2024-12-05 12:24:30.740422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.090 [2024-12-05 12:24:30.740441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:00.090 [2024-12-05 12:24:30.740455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:21:00.090 [2024-12-05 12:24:30.740486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.090 [2024-12-05 12:24:30.782590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.090 [2024-12-05 12:24:30.782767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:00.090 [2024-12-05 12:24:30.782796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.090 [2024-12-05 12:24:30.782806] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:00.090 [2024-12-05 12:24:30.782880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.090 [2024-12-05 12:24:30.782891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:00.090 [2024-12-05 12:24:30.782902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.090 [2024-12-05 12:24:30.782911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.090 [2024-12-05 12:24:30.783011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.090 [2024-12-05 12:24:30.783022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:00.090 [2024-12-05 12:24:30.783033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.090 [2024-12-05 12:24:30.783041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.090 [2024-12-05 12:24:30.783060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.090 [2024-12-05 12:24:30.783069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:00.090 [2024-12-05 12:24:30.783081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.090 [2024-12-05 12:24:30.783089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.090 [2024-12-05 12:24:30.874069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.090 [2024-12-05 12:24:30.874293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.090 [2024-12-05 12:24:30.874325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.090 [2024-12-05 12:24:30.874335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.090 [2024-12-05 12:24:30.948572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.090 [2024-12-05 12:24:30.948773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:00.090 [2024-12-05 12:24:30.948800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.090 [2024-12-05 12:24:30.948810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.090 [2024-12-05 12:24:30.948959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.090 [2024-12-05 12:24:30.948972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.090 [2024-12-05 12:24:30.948983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.090 [2024-12-05 12:24:30.948992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.090 [2024-12-05 12:24:30.949040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.090 [2024-12-05 12:24:30.949051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.090 [2024-12-05 12:24:30.949063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.090 [2024-12-05 12:24:30.949072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.090 [2024-12-05 12:24:30.949195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.090 [2024-12-05 12:24:30.949209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.090 [2024-12-05 12:24:30.949237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:21:00.090 [2024-12-05 12:24:30.949247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.090 [2024-12-05 12:24:30.949288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.090 [2024-12-05 12:24:30.949299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:00.090 [2024-12-05 12:24:30.949310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.090 [2024-12-05 12:24:30.949319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.090 [2024-12-05 12:24:30.949373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.090 [2024-12-05 12:24:30.949387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.090 [2024-12-05 12:24:30.949398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.090 [2024-12-05 12:24:30.949416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.090 [2024-12-05 12:24:30.949511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.090 [2024-12-05 12:24:30.949525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.090 [2024-12-05 12:24:30.949537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.090 [2024-12-05 12:24:30.949546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.090 [2024-12-05 12:24:30.949728] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 614.894 ms, result 0 00:21:00.351 true 00:21:00.351 12:24:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76371 00:21:00.351 12:24:30 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76371 ']' 00:21:00.351 12:24:30 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76371 00:21:00.351 12:24:30 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:21:00.351 12:24:30 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:00.351 12:24:30 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76371 00:21:00.351 12:24:31 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:00.351 12:24:31 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:00.351 12:24:31 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76371' 00:21:00.351 killing process with pid 76371 00:21:00.351 12:24:31 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76371 00:21:00.351 Received shutdown signal, test time was about 4.000000 seconds 00:21:00.351 00:21:00.351 Latency(us) 00:21:00.351 [2024-12-05T12:24:31.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.351 [2024-12-05T12:24:31.220Z] =================================================================================================================== 00:21:00.351 [2024-12-05T12:24:31.220Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:00.351 12:24:31 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76371 00:21:01.291 Remove shared memory files 00:21:01.291 12:24:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:01.291 12:24:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:21:01.291 12:24:31 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:01.291 12:24:31 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:21:01.291 12:24:31 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:21:01.291 12:24:31 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:21:01.291 12:24:31 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:01.291 12:24:31 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:21:01.291 ************************************ 00:21:01.291 END TEST ftl_bdevperf 00:21:01.291 ************************************ 00:21:01.291 00:21:01.291 real 0m23.082s 00:21:01.291 user 0m25.720s 00:21:01.291 sys 0m1.082s 00:21:01.291 12:24:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:01.291 12:24:31 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:01.291 12:24:31 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:01.291 12:24:31 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:01.292 12:24:31 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:01.292 12:24:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:01.292 ************************************ 00:21:01.292 START TEST ftl_trim 00:21:01.292 ************************************ 00:21:01.292 12:24:31 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:01.292 * Looking for test storage... 00:21:01.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:01.292 12:24:32 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:01.292 12:24:32 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:21:01.292 12:24:32 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:01.292 12:24:32 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:01.292 12:24:32 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:21:01.292 12:24:32 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:01.292 12:24:32 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:01.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.292 --rc genhtml_branch_coverage=1 00:21:01.292 --rc genhtml_function_coverage=1 00:21:01.292 --rc genhtml_legend=1 00:21:01.292 --rc geninfo_all_blocks=1 00:21:01.292 --rc geninfo_unexecuted_blocks=1 00:21:01.292 00:21:01.292 ' 00:21:01.292 12:24:32 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:01.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.292 --rc genhtml_branch_coverage=1 00:21:01.292 --rc genhtml_function_coverage=1 00:21:01.292 --rc genhtml_legend=1 00:21:01.292 --rc geninfo_all_blocks=1 00:21:01.292 --rc geninfo_unexecuted_blocks=1 00:21:01.292 00:21:01.292 ' 00:21:01.292 12:24:32 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:01.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.292 --rc genhtml_branch_coverage=1 00:21:01.292 --rc genhtml_function_coverage=1 00:21:01.292 --rc genhtml_legend=1 00:21:01.292 --rc geninfo_all_blocks=1 00:21:01.292 --rc geninfo_unexecuted_blocks=1 00:21:01.292 00:21:01.292 ' 00:21:01.292 12:24:32 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:01.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.292 --rc genhtml_branch_coverage=1 00:21:01.292 --rc genhtml_function_coverage=1 00:21:01.292 --rc genhtml_legend=1 00:21:01.292 --rc geninfo_all_blocks=1 00:21:01.292 --rc geninfo_unexecuted_blocks=1 00:21:01.292 00:21:01.292 ' 00:21:01.292 12:24:32 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:01.552 12:24:32 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76724 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76724 00:21:01.552 12:24:32 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:01.552 12:24:32 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76724 ']' 00:21:01.552 12:24:32 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.552 12:24:32 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.552 12:24:32 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.552 12:24:32 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.552 12:24:32 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:01.552 [2024-12-05 12:24:32.266477] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:21:01.552 [2024-12-05 12:24:32.266756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76724 ] 00:21:01.811 [2024-12-05 12:24:32.433177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:01.811 [2024-12-05 12:24:32.565719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.811 [2024-12-05 12:24:32.565985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.811 [2024-12-05 12:24:32.566063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.427 12:24:33 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.427 12:24:33 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:02.427 12:24:33 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:02.427 12:24:33 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:02.427 12:24:33 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:02.427 12:24:33 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:02.427 12:24:33 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:02.427 12:24:33 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:02.685 12:24:33 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:02.685 12:24:33 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:02.685 12:24:33 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:02.685 12:24:33 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:02.685 12:24:33 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:02.685 12:24:33 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:02.685 12:24:33 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:02.685 12:24:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:02.943 12:24:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:02.943 { 00:21:02.943 "name": "nvme0n1", 00:21:02.943 "aliases": [ 
00:21:02.943 "ed1f9977-89f0-4868-bc43-6f8820a14e9a" 00:21:02.943 ], 00:21:02.943 "product_name": "NVMe disk", 00:21:02.943 "block_size": 4096, 00:21:02.943 "num_blocks": 1310720, 00:21:02.943 "uuid": "ed1f9977-89f0-4868-bc43-6f8820a14e9a", 00:21:02.943 "numa_id": -1, 00:21:02.943 "assigned_rate_limits": { 00:21:02.943 "rw_ios_per_sec": 0, 00:21:02.943 "rw_mbytes_per_sec": 0, 00:21:02.943 "r_mbytes_per_sec": 0, 00:21:02.943 "w_mbytes_per_sec": 0 00:21:02.943 }, 00:21:02.943 "claimed": true, 00:21:02.943 "claim_type": "read_many_write_one", 00:21:02.943 "zoned": false, 00:21:02.943 "supported_io_types": { 00:21:02.943 "read": true, 00:21:02.943 "write": true, 00:21:02.943 "unmap": true, 00:21:02.943 "flush": true, 00:21:02.943 "reset": true, 00:21:02.943 "nvme_admin": true, 00:21:02.943 "nvme_io": true, 00:21:02.944 "nvme_io_md": false, 00:21:02.944 "write_zeroes": true, 00:21:02.944 "zcopy": false, 00:21:02.944 "get_zone_info": false, 00:21:02.944 "zone_management": false, 00:21:02.944 "zone_append": false, 00:21:02.944 "compare": true, 00:21:02.944 "compare_and_write": false, 00:21:02.944 "abort": true, 00:21:02.944 "seek_hole": false, 00:21:02.944 "seek_data": false, 00:21:02.944 "copy": true, 00:21:02.944 "nvme_iov_md": false 00:21:02.944 }, 00:21:02.944 "driver_specific": { 00:21:02.944 "nvme": [ 00:21:02.944 { 00:21:02.944 "pci_address": "0000:00:11.0", 00:21:02.944 "trid": { 00:21:02.944 "trtype": "PCIe", 00:21:02.944 "traddr": "0000:00:11.0" 00:21:02.944 }, 00:21:02.944 "ctrlr_data": { 00:21:02.944 "cntlid": 0, 00:21:02.944 "vendor_id": "0x1b36", 00:21:02.944 "model_number": "QEMU NVMe Ctrl", 00:21:02.944 "serial_number": "12341", 00:21:02.944 "firmware_revision": "8.0.0", 00:21:02.944 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:02.944 "oacs": { 00:21:02.944 "security": 0, 00:21:02.944 "format": 1, 00:21:02.944 "firmware": 0, 00:21:02.944 "ns_manage": 1 00:21:02.944 }, 00:21:02.944 "multi_ctrlr": false, 00:21:02.944 "ana_reporting": false 00:21:02.944 }, 00:21:02.944 "vs": { 00:21:02.944 "nvme_version": "1.4" 00:21:02.944 }, 00:21:02.944 "ns_data": { 00:21:02.944 "id": 1, 00:21:02.944 "can_share": false 00:21:02.944 } 00:21:02.944 } 00:21:02.944 ], 00:21:02.944 "mp_policy": "active_passive" 00:21:02.944 } 00:21:02.944 } 00:21:02.944 ]' 00:21:02.944 12:24:33 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:02.944 12:24:33 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:02.944 12:24:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:02.944 12:24:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:02.944 12:24:33 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:02.944 12:24:33 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:21:02.944 12:24:33 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:02.944 12:24:33 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:02.944 12:24:33 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:02.944 12:24:33 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:02.944 12:24:33 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:03.202 12:24:33 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=d8fc6c8c-b86c-43bf-9465-75e8466eb1b2 00:21:03.202 12:24:33 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:03.202 12:24:33 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u d8fc6c8c-b86c-43bf-9465-75e8466eb1b2 00:21:03.466 12:24:34 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:03.750 12:24:34 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=20bcbf8b-f50d-49a9-93cf-88ac26ad90ab 00:21:03.750 12:24:34 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 20bcbf8b-f50d-49a9-93cf-88ac26ad90ab 00:21:03.750 12:24:34 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=f2b5c1d4-9751-4d1a-a12e-7c11a4079a23 00:21:03.750 12:24:34 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f2b5c1d4-9751-4d1a-a12e-7c11a4079a23 00:21:03.750 12:24:34 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:03.750 12:24:34 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:03.750 12:24:34 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=f2b5c1d4-9751-4d1a-a12e-7c11a4079a23 00:21:03.750 12:24:34 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:03.750 12:24:34 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size f2b5c1d4-9751-4d1a-a12e-7c11a4079a23 00:21:03.750 12:24:34 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f2b5c1d4-9751-4d1a-a12e-7c11a4079a23 00:21:03.750 12:24:34 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:03.750 12:24:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:03.750 12:24:34 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:03.750 12:24:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f2b5c1d4-9751-4d1a-a12e-7c11a4079a23 00:21:04.014 12:24:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:04.014 { 00:21:04.014 "name": "f2b5c1d4-9751-4d1a-a12e-7c11a4079a23", 00:21:04.014 "aliases": [ 00:21:04.014 "lvs/nvme0n1p0" 00:21:04.014 ], 00:21:04.014 "product_name": "Logical Volume", 00:21:04.014 "block_size": 4096, 00:21:04.014 "num_blocks": 26476544, 00:21:04.014 "uuid": "f2b5c1d4-9751-4d1a-a12e-7c11a4079a23", 00:21:04.014 "assigned_rate_limits": { 00:21:04.014 "rw_ios_per_sec": 0, 00:21:04.014 "rw_mbytes_per_sec": 0, 00:21:04.014 "r_mbytes_per_sec": 0, 00:21:04.014 "w_mbytes_per_sec": 0 00:21:04.014 }, 00:21:04.014 "claimed": false, 00:21:04.014 "zoned": false, 00:21:04.014 "supported_io_types": { 00:21:04.014 "read": true, 00:21:04.014 "write": true, 00:21:04.014 "unmap": true, 00:21:04.014 "flush": false, 00:21:04.014 "reset": true, 00:21:04.014 "nvme_admin": false, 00:21:04.014 "nvme_io": false, 00:21:04.014 "nvme_io_md": false, 00:21:04.014 "write_zeroes": true, 00:21:04.014 "zcopy": false, 00:21:04.014 "get_zone_info": false, 00:21:04.014 "zone_management": false, 00:21:04.014 "zone_append": false, 00:21:04.014 "compare": false, 00:21:04.014 "compare_and_write": false, 00:21:04.014 "abort": false, 00:21:04.014 "seek_hole": true, 00:21:04.014 "seek_data": true, 00:21:04.014 "copy": false, 00:21:04.014 "nvme_iov_md": false 00:21:04.015 }, 00:21:04.015 "driver_specific": { 00:21:04.015 "lvol": { 00:21:04.015 "lvol_store_uuid": "20bcbf8b-f50d-49a9-93cf-88ac26ad90ab", 00:21:04.015 "base_bdev": "nvme0n1", 00:21:04.015 "thin_provision": true, 00:21:04.015 "num_allocated_clusters": 0, 00:21:04.015 "snapshot": false, 00:21:04.015 "clone": false, 00:21:04.015 "esnap_clone": false 00:21:04.015 } 00:21:04.015 } 00:21:04.015 } 00:21:04.015 ]' 00:21:04.015 12:24:34 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:04.015 12:24:34 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:04.015 12:24:34 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:04.015 12:24:34 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:04.015 12:24:34 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:04.015 12:24:34 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:04.015 12:24:34 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:04.015 12:24:34 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:04.015 12:24:34 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:04.273 12:24:35 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:04.273 12:24:35 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:04.273 12:24:35 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size f2b5c1d4-9751-4d1a-a12e-7c11a4079a23 00:21:04.273 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f2b5c1d4-9751-4d1a-a12e-7c11a4079a23 00:21:04.273 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:04.273 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:04.273 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:04.273 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f2b5c1d4-9751-4d1a-a12e-7c11a4079a23 00:21:04.530 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:04.530 { 00:21:04.530 "name": "f2b5c1d4-9751-4d1a-a12e-7c11a4079a23", 00:21:04.530 "aliases": [ 00:21:04.530 "lvs/nvme0n1p0" 00:21:04.530 ], 00:21:04.530 "product_name": "Logical Volume", 00:21:04.530 "block_size": 4096, 00:21:04.530 "num_blocks": 26476544, 00:21:04.530 "uuid": "f2b5c1d4-9751-4d1a-a12e-7c11a4079a23", 00:21:04.530 "assigned_rate_limits": { 00:21:04.530 "rw_ios_per_sec": 0, 00:21:04.530 "rw_mbytes_per_sec": 0, 00:21:04.530 "r_mbytes_per_sec": 0, 00:21:04.530 "w_mbytes_per_sec": 0 00:21:04.530 }, 00:21:04.530 "claimed": false, 00:21:04.530 "zoned": false, 00:21:04.530 "supported_io_types": { 00:21:04.530 "read": true, 00:21:04.530 "write": true, 00:21:04.530 "unmap": true, 00:21:04.530 "flush": false, 00:21:04.530 "reset": true, 00:21:04.530 "nvme_admin": false, 00:21:04.530 "nvme_io": false, 00:21:04.530 "nvme_io_md": false, 00:21:04.530 "write_zeroes": true, 00:21:04.530 "zcopy": false, 00:21:04.530 "get_zone_info": false, 00:21:04.530 "zone_management": false, 00:21:04.530 "zone_append": false, 00:21:04.530 "compare": false, 00:21:04.530 "compare_and_write": false, 00:21:04.530 "abort": false, 00:21:04.530 "seek_hole": true, 00:21:04.530 "seek_data": true, 00:21:04.530 "copy": false, 00:21:04.530 "nvme_iov_md": false 00:21:04.530 }, 00:21:04.530 "driver_specific": { 00:21:04.530 "lvol": { 00:21:04.530 "lvol_store_uuid": "20bcbf8b-f50d-49a9-93cf-88ac26ad90ab", 00:21:04.530 "base_bdev": "nvme0n1", 00:21:04.530 "thin_provision": true, 00:21:04.530 "num_allocated_clusters": 0, 00:21:04.530 "snapshot": false, 00:21:04.530 "clone": false, 00:21:04.530 "esnap_clone": false 00:21:04.530 } 00:21:04.530 } 00:21:04.530 } 00:21:04.530 ]' 00:21:04.530 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:04.530 12:24:35 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:21:04.530 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:04.530 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:04.531 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:04.531 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:04.531 12:24:35 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:04.531 12:24:35 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:04.788 12:24:35 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:04.788 12:24:35 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:04.788 12:24:35 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size f2b5c1d4-9751-4d1a-a12e-7c11a4079a23 00:21:04.788 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f2b5c1d4-9751-4d1a-a12e-7c11a4079a23 00:21:04.788 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:04.788 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:04.788 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:04.788 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f2b5c1d4-9751-4d1a-a12e-7c11a4079a23 00:21:05.046 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:05.046 { 00:21:05.046 "name": "f2b5c1d4-9751-4d1a-a12e-7c11a4079a23", 00:21:05.046 "aliases": [ 00:21:05.046 "lvs/nvme0n1p0" 00:21:05.046 ], 00:21:05.046 "product_name": "Logical Volume", 00:21:05.046 "block_size": 4096, 00:21:05.046 "num_blocks": 26476544, 00:21:05.046 "uuid": "f2b5c1d4-9751-4d1a-a12e-7c11a4079a23", 00:21:05.046 "assigned_rate_limits": { 00:21:05.046 "rw_ios_per_sec": 0, 00:21:05.046 "rw_mbytes_per_sec": 0, 00:21:05.046 "r_mbytes_per_sec": 0, 00:21:05.046 "w_mbytes_per_sec": 0 00:21:05.046 }, 00:21:05.046 "claimed": false, 00:21:05.046 "zoned": false, 00:21:05.046 "supported_io_types": { 00:21:05.046 "read": true, 00:21:05.046 "write": true, 00:21:05.046 "unmap": true, 00:21:05.046 "flush": false, 00:21:05.046 "reset": true, 00:21:05.046 "nvme_admin": false, 00:21:05.046 "nvme_io": false, 00:21:05.046 "nvme_io_md": false, 00:21:05.046 "write_zeroes": true, 00:21:05.046 "zcopy": false, 00:21:05.046 "get_zone_info": false, 00:21:05.046 "zone_management": false, 00:21:05.046 "zone_append": false, 00:21:05.046 "compare": false, 00:21:05.046 "compare_and_write": false, 00:21:05.046 "abort": false, 00:21:05.046 "seek_hole": true, 00:21:05.046 "seek_data": true, 00:21:05.046 "copy": false, 00:21:05.046 "nvme_iov_md": false 00:21:05.046 }, 00:21:05.046 "driver_specific": { 00:21:05.046 "lvol": { 00:21:05.046 "lvol_store_uuid": "20bcbf8b-f50d-49a9-93cf-88ac26ad90ab", 00:21:05.046 "base_bdev": "nvme0n1", 00:21:05.046 "thin_provision": true, 00:21:05.046 "num_allocated_clusters": 0, 00:21:05.046 "snapshot": false, 00:21:05.046 "clone": false, 00:21:05.046 "esnap_clone": false 00:21:05.046 } 00:21:05.046 } 00:21:05.046 } 00:21:05.046 ]' 00:21:05.046 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:05.046 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:05.046 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:05.046 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:21:05.046 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:05.046 12:24:35 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:05.046 12:24:35 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:05.046 12:24:35 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f2b5c1d4-9751-4d1a-a12e-7c11a4079a23 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:05.305 [2024-12-05 12:24:36.011230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.305 [2024-12-05 12:24:36.011365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:05.305 [2024-12-05 12:24:36.011389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:05.305 [2024-12-05 12:24:36.011397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.305 [2024-12-05 12:24:36.013792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.305 [2024-12-05 12:24:36.013823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:05.305 [2024-12-05 12:24:36.013833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.364 ms 00:21:05.305 [2024-12-05 12:24:36.013840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.305 [2024-12-05 12:24:36.013928] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:05.305 [2024-12-05 12:24:36.014525] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:05.305 [2024-12-05 12:24:36.014551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.305 [2024-12-05 12:24:36.014558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:05.305 [2024-12-05 12:24:36.014567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:21:05.305 [2024-12-05 12:24:36.014573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.305 [2024-12-05 12:24:36.014656] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b202c494-44a1-46c8-8ff5-771af2981a3d 00:21:05.305 [2024-12-05 12:24:36.015934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.305 [2024-12-05 12:24:36.015965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:05.305 [2024-12-05 12:24:36.015974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:05.305 [2024-12-05 12:24:36.015982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.305 [2024-12-05 12:24:36.022855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.305 [2024-12-05 12:24:36.022881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:05.305 [2024-12-05 12:24:36.022890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.810 ms 00:21:05.305 [2024-12-05 12:24:36.022899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.305 [2024-12-05 12:24:36.023006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.305 [2024-12-05 12:24:36.023017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:05.305 [2024-12-05 12:24:36.023023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.060 ms 00:21:05.305 [2024-12-05 12:24:36.023034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.305 [2024-12-05 12:24:36.023064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.305 [2024-12-05 12:24:36.023072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:05.305 [2024-12-05 12:24:36.023078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:05.305 [2024-12-05 12:24:36.023087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.305 [2024-12-05 12:24:36.023113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:05.305 [2024-12-05 12:24:36.026337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.305 [2024-12-05 12:24:36.026447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:05.305 [2024-12-05 12:24:36.026476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.226 ms 00:21:05.305 [2024-12-05 12:24:36.026484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.305 [2024-12-05 12:24:36.026524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.305 [2024-12-05 12:24:36.026545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:05.305 [2024-12-05 12:24:36.026553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:05.305 [2024-12-05 12:24:36.026560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.305 [2024-12-05 12:24:36.026586] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:05.305 [2024-12-05 12:24:36.026697] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:05.305 [2024-12-05 12:24:36.026710] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:05.305 [2024-12-05 12:24:36.026719] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:05.305 [2024-12-05 12:24:36.026728] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:05.305 [2024-12-05 12:24:36.026735] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:05.305 [2024-12-05 12:24:36.026743] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:05.305 [2024-12-05 12:24:36.026749] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:05.305 [2024-12-05 12:24:36.026756] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:05.305 [2024-12-05 12:24:36.026763] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:05.305 [2024-12-05 12:24:36.026771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.305 [2024-12-05 12:24:36.026776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:05.305 [2024-12-05 12:24:36.026783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:21:05.305 [2024-12-05 12:24:36.026789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.305 [2024-12-05 12:24:36.026865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.305 
[2024-12-05 12:24:36.026872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:05.305 [2024-12-05 12:24:36.026879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:05.305 [2024-12-05 12:24:36.026884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.305 [2024-12-05 12:24:36.027003] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:05.305 [2024-12-05 12:24:36.027011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:05.305 [2024-12-05 12:24:36.027019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:05.305 [2024-12-05 12:24:36.027026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.305 [2024-12-05 12:24:36.027033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:05.305 [2024-12-05 12:24:36.027039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:05.305 [2024-12-05 12:24:36.027045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:05.305 [2024-12-05 12:24:36.027050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:05.305 [2024-12-05 12:24:36.027057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:05.305 [2024-12-05 12:24:36.027061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:05.305 [2024-12-05 12:24:36.027068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:05.305 [2024-12-05 12:24:36.027073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:05.305 [2024-12-05 12:24:36.027080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:05.305 [2024-12-05 12:24:36.027085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:05.305 [2024-12-05 12:24:36.027092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:05.305 [2024-12-05 12:24:36.027097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.305 [2024-12-05 12:24:36.027105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:05.305 [2024-12-05 12:24:36.027110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:05.305 [2024-12-05 12:24:36.027117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.305 [2024-12-05 12:24:36.027122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:05.305 [2024-12-05 12:24:36.027129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:05.305 [2024-12-05 12:24:36.027134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.306 [2024-12-05 12:24:36.027142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:05.306 [2024-12-05 12:24:36.027148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:05.306 [2024-12-05 12:24:36.027154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.306 [2024-12-05 12:24:36.027159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:05.306 [2024-12-05 12:24:36.027166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:05.306 [2024-12-05 12:24:36.027171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.306 [2024-12-05 12:24:36.027177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:21:05.306 [2024-12-05 12:24:36.027182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:05.306 [2024-12-05 12:24:36.027189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.306 [2024-12-05 12:24:36.027194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:05.306 [2024-12-05 12:24:36.027202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:05.306 [2024-12-05 12:24:36.027208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:05.306 [2024-12-05 12:24:36.027214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:05.306 [2024-12-05 12:24:36.027219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:05.306 [2024-12-05 12:24:36.027225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:05.306 [2024-12-05 12:24:36.027230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:05.306 [2024-12-05 12:24:36.027237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:05.306 [2024-12-05 12:24:36.027242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.306 [2024-12-05 12:24:36.027248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:05.306 [2024-12-05 12:24:36.027253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:05.306 [2024-12-05 12:24:36.027259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.306 [2024-12-05 12:24:36.027264] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:05.306 [2024-12-05 12:24:36.027271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:05.306 [2024-12-05 12:24:36.027276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:05.306 [2024-12-05 12:24:36.027284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.306 [2024-12-05 12:24:36.027290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:05.306 [2024-12-05 12:24:36.027299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:05.306 [2024-12-05 12:24:36.027304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:05.306 [2024-12-05 12:24:36.027311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:05.306 [2024-12-05 12:24:36.027316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:05.306 [2024-12-05 12:24:36.027323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:05.306 [2024-12-05 12:24:36.027331] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:05.306 [2024-12-05 12:24:36.027341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:05.306 [2024-12-05 12:24:36.027349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:05.306 [2024-12-05 12:24:36.027356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:05.306 [2024-12-05 12:24:36.027361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:21:05.306 [2024-12-05 12:24:36.027369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:05.306 [2024-12-05 12:24:36.027374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:05.306 [2024-12-05 12:24:36.027381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:05.306 [2024-12-05 12:24:36.027387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:05.306 [2024-12-05 12:24:36.027393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:05.306 [2024-12-05 12:24:36.027398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:05.306 [2024-12-05 12:24:36.027407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:05.306 [2024-12-05 12:24:36.027412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:05.306 [2024-12-05 12:24:36.027419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:05.306 [2024-12-05 12:24:36.027424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:05.306 [2024-12-05 12:24:36.027431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:05.306 [2024-12-05 12:24:36.027437] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:05.306 [2024-12-05 12:24:36.027448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:05.306 [2024-12-05 12:24:36.027454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:05.306 [2024-12-05 12:24:36.027471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:05.306 [2024-12-05 12:24:36.027477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:05.306 [2024-12-05 12:24:36.027490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:05.306 [2024-12-05 12:24:36.027497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.306 [2024-12-05 12:24:36.027505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:05.306 [2024-12-05 12:24:36.027510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:21:05.306 [2024-12-05 12:24:36.027517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.306 [2024-12-05 12:24:36.027577] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:21:05.306 [2024-12-05 12:24:36.027587] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:07.836 [2024-12-05 12:24:38.443457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.836 [2024-12-05 12:24:38.443677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:07.836 [2024-12-05 12:24:38.443701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2415.869 ms 00:21:07.836 [2024-12-05 12:24:38.443714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.836 [2024-12-05 12:24:38.471982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.836 [2024-12-05 12:24:38.472028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:07.836 [2024-12-05 12:24:38.472043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.025 ms 00:21:07.836 [2024-12-05 12:24:38.472053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.836 [2024-12-05 12:24:38.472204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.836 [2024-12-05 12:24:38.472217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:07.836 [2024-12-05 12:24:38.472242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:07.836 [2024-12-05 12:24:38.472254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.836 [2024-12-05 12:24:38.520201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.836 [2024-12-05 12:24:38.520244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:07.836 [2024-12-05 12:24:38.520257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.912 ms 00:21:07.836 [2024-12-05 12:24:38.520269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.836 [2024-12-05 12:24:38.520347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.836 [2024-12-05 12:24:38.520361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:07.836 [2024-12-05 12:24:38.520369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:07.836 [2024-12-05 12:24:38.520379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.836 [2024-12-05 12:24:38.520807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.836 [2024-12-05 12:24:38.520829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:07.836 [2024-12-05 12:24:38.520838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:21:07.836 [2024-12-05 12:24:38.520848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.836 [2024-12-05 12:24:38.520963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.836 [2024-12-05 12:24:38.520974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:07.836 [2024-12-05 12:24:38.520997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:21:07.836 [2024-12-05 12:24:38.521009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.836 [2024-12-05 12:24:38.536870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.836 [2024-12-05 12:24:38.536902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:21:07.836 [2024-12-05 12:24:38.536912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.828 ms 00:21:07.836 [2024-12-05 12:24:38.536922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.836 [2024-12-05 12:24:38.549836] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:07.836 [2024-12-05 12:24:38.567283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.836 [2024-12-05 12:24:38.567317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:07.836 [2024-12-05 12:24:38.567330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.263 ms 00:21:07.836 [2024-12-05 12:24:38.567338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.836 [2024-12-05 12:24:38.636401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.836 [2024-12-05 12:24:38.636440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:07.836 [2024-12-05 12:24:38.636454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.991 ms 00:21:07.836 [2024-12-05 12:24:38.636485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.836 [2024-12-05 12:24:38.636699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.836 [2024-12-05 12:24:38.636710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:07.836 [2024-12-05 12:24:38.636724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:21:07.836 [2024-12-05 12:24:38.636732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.836 [2024-12-05 12:24:38.659810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.836 [2024-12-05 12:24:38.659950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:07.837 [2024-12-05 12:24:38.659972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.040 ms 00:21:07.837 [2024-12-05 12:24:38.659980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.837 [2024-12-05 12:24:38.682196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.837 [2024-12-05 12:24:38.682226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:07.837 [2024-12-05 12:24:38.682239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.158 ms 00:21:07.837 [2024-12-05 12:24:38.682246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.837 [2024-12-05 12:24:38.682873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.837 [2024-12-05 12:24:38.682892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:07.837 [2024-12-05 12:24:38.682903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:21:07.837 [2024-12-05 12:24:38.682911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.095 [2024-12-05 12:24:38.751245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.095 [2024-12-05 12:24:38.751361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:08.095 [2024-12-05 12:24:38.751382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.304 ms 00:21:08.096 [2024-12-05 12:24:38.751391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
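
[ftl_trim setup note] The trace to this point covers bring-up of the FTL bdev under test: a stale lvstore is deleted, a fresh lvstore named lvs and a 103424 MiB thin-provisioned lvol are created on nvme0n1, a second controller nvc0 (0000:00:10.0) is attached and split to produce the 5171 MiB write-buffer cache nvc0n1p0, and bdev_ftl_create assembles ftl0 from the pair; the entries that follow finish metadata initialization and close out the 'FTL startup' management process. The L2P sizing explains the resident-size notice just above: 23592960 entries at 4 bytes each is exactly 90 MiB, more than the 60 MiB given via --l2p_dram_limit, so only 59 of 60 MiB can stay resident. A condensed sketch of the RPC sequence as it appears in this run (the UUID plumbing is illustrative; the actual test drives it through ftl/common.sh helpers):

    # Condensed from the ftl_trim trace in this log; rpc.py is the SPDK RPC client.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Drop any lvstores left over from a previous run.
    for lvs in $("$RPC" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        "$RPC" bdev_lvol_delete_lvstore -u "$lvs"
    done

    # Fresh lvstore on the base namespace, then a thin-provisioned (-t) lvol on it.
    lvs=$("$RPC" bdev_lvol_create_lvstore nvme0n1 lvs)
    base=$("$RPC" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

    # Attach the cache controller and carve a 5171 MiB split for the NV cache.
    "$RPC" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    "$RPC" bdev_split_create nvc0n1 -s 5171 1        # yields nvc0n1p0

    # Assemble the FTL bdev; the generous -t 240 allows for NV cache scrubbing,
    # which the log itself warns "may take a while".
    "$RPC" -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
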
00:21:08.096 [2024-12-05 12:24:38.776102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.096 [2024-12-05 12:24:38.776133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:08.096 [2024-12-05 12:24:38.776146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.609 ms 00:21:08.096 [2024-12-05 12:24:38.776153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.096 [2024-12-05 12:24:38.798959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.096 [2024-12-05 12:24:38.798991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:08.096 [2024-12-05 12:24:38.799003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.744 ms 00:21:08.096 [2024-12-05 12:24:38.799011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.096 [2024-12-05 12:24:38.821826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.096 [2024-12-05 12:24:38.821984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:08.096 [2024-12-05 12:24:38.822003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.761 ms 00:21:08.096 [2024-12-05 12:24:38.822011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.096 [2024-12-05 12:24:38.822067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.096 [2024-12-05 12:24:38.822079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:08.096 [2024-12-05 12:24:38.822091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:08.096 [2024-12-05 12:24:38.822099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.096 [2024-12-05 12:24:38.822178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.096 [2024-12-05 12:24:38.822186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:08.096 [2024-12-05 12:24:38.822196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:08.096 [2024-12-05 12:24:38.822203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.096 [2024-12-05 12:24:38.823094] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:08.096 [2024-12-05 12:24:38.825955] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2811.570 ms, result 0 00:21:08.096 { 00:21:08.096 "name": "ftl0", 00:21:08.096 "uuid": "b202c494-44a1-46c8-8ff5-771af2981a3d" 00:21:08.096 } 00:21:08.096 [2024-12-05 12:24:38.826766] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:08.096 12:24:38 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:08.096 12:24:38 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:08.096 12:24:38 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:08.096 12:24:38 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:21:08.096 12:24:38 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:08.096 12:24:38 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:08.096 12:24:38 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:08.355 12:24:39 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:08.355 [ 00:21:08.355 { 00:21:08.355 "name": "ftl0", 00:21:08.355 "aliases": [ 00:21:08.355 "b202c494-44a1-46c8-8ff5-771af2981a3d" 00:21:08.355 ], 00:21:08.355 "product_name": "FTL disk", 00:21:08.355 "block_size": 4096, 00:21:08.355 "num_blocks": 23592960, 00:21:08.355 "uuid": "b202c494-44a1-46c8-8ff5-771af2981a3d", 00:21:08.355 "assigned_rate_limits": { 00:21:08.355 "rw_ios_per_sec": 0, 00:21:08.355 "rw_mbytes_per_sec": 0, 00:21:08.355 "r_mbytes_per_sec": 0, 00:21:08.355 "w_mbytes_per_sec": 0 00:21:08.355 }, 00:21:08.355 "claimed": false, 00:21:08.355 "zoned": false, 00:21:08.355 "supported_io_types": { 00:21:08.355 "read": true, 00:21:08.355 "write": true, 00:21:08.355 "unmap": true, 00:21:08.355 "flush": true, 00:21:08.355 "reset": false, 00:21:08.355 "nvme_admin": false, 00:21:08.355 "nvme_io": false, 00:21:08.355 "nvme_io_md": false, 00:21:08.355 "write_zeroes": true, 00:21:08.355 "zcopy": false, 00:21:08.355 "get_zone_info": false, 00:21:08.355 "zone_management": false, 00:21:08.355 "zone_append": false, 00:21:08.355 "compare": false, 00:21:08.355 "compare_and_write": false, 00:21:08.355 "abort": false, 00:21:08.355 "seek_hole": false, 00:21:08.355 "seek_data": false, 00:21:08.355 "copy": false, 00:21:08.355 "nvme_iov_md": false 00:21:08.355 }, 00:21:08.355 "driver_specific": { 00:21:08.355 "ftl": { 00:21:08.355 "base_bdev": "f2b5c1d4-9751-4d1a-a12e-7c11a4079a23", 00:21:08.355 "cache": "nvc0n1p0" 00:21:08.355 } 00:21:08.355 } 00:21:08.355 } 00:21:08.355 ] 00:21:08.613 12:24:39 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:21:08.613 12:24:39 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:08.613 12:24:39 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:08.613 12:24:39 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:08.613 12:24:39 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:08.872 12:24:39 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:08.872 { 00:21:08.872 "name": "ftl0", 00:21:08.872 "aliases": [ 00:21:08.872 "b202c494-44a1-46c8-8ff5-771af2981a3d" 00:21:08.872 ], 00:21:08.872 "product_name": "FTL disk", 00:21:08.872 "block_size": 4096, 00:21:08.872 "num_blocks": 23592960, 00:21:08.872 "uuid": "b202c494-44a1-46c8-8ff5-771af2981a3d", 00:21:08.872 "assigned_rate_limits": { 00:21:08.872 "rw_ios_per_sec": 0, 00:21:08.872 "rw_mbytes_per_sec": 0, 00:21:08.872 "r_mbytes_per_sec": 0, 00:21:08.872 "w_mbytes_per_sec": 0 00:21:08.872 }, 00:21:08.872 "claimed": false, 00:21:08.872 "zoned": false, 00:21:08.872 "supported_io_types": { 00:21:08.872 "read": true, 00:21:08.872 "write": true, 00:21:08.872 "unmap": true, 00:21:08.872 "flush": true, 00:21:08.872 "reset": false, 00:21:08.872 "nvme_admin": false, 00:21:08.872 "nvme_io": false, 00:21:08.872 "nvme_io_md": false, 00:21:08.872 "write_zeroes": true, 00:21:08.872 "zcopy": false, 00:21:08.872 "get_zone_info": false, 00:21:08.872 "zone_management": false, 00:21:08.872 "zone_append": false, 00:21:08.872 "compare": false, 00:21:08.872 "compare_and_write": false, 00:21:08.872 "abort": false, 00:21:08.872 "seek_hole": false, 00:21:08.872 "seek_data": false, 00:21:08.872 "copy": false, 00:21:08.872 "nvme_iov_md": false 00:21:08.872 }, 00:21:08.872 "driver_specific": { 00:21:08.872 "ftl": { 00:21:08.872 "base_bdev": "f2b5c1d4-9751-4d1a-a12e-7c11a4079a23", 
00:21:08.872 "cache": "nvc0n1p0" 00:21:08.872 } 00:21:08.872 } 00:21:08.872 } 00:21:08.872 ]' 00:21:08.872 12:24:39 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:08.872 12:24:39 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:08.872 12:24:39 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:09.132 [2024-12-05 12:24:39.846102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.132 [2024-12-05 12:24:39.846146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:09.132 [2024-12-05 12:24:39.846162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:09.132 [2024-12-05 12:24:39.846175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.132 [2024-12-05 12:24:39.846212] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:09.132 [2024-12-05 12:24:39.848961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.132 [2024-12-05 12:24:39.849093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:09.132 [2024-12-05 12:24:39.849116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:21:09.132 [2024-12-05 12:24:39.849125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.132 [2024-12-05 12:24:39.849683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.132 [2024-12-05 12:24:39.849700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:09.132 [2024-12-05 12:24:39.849711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:21:09.132 [2024-12-05 12:24:39.849719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.132 [2024-12-05 12:24:39.853365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.132 [2024-12-05 12:24:39.853388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:09.132 [2024-12-05 12:24:39.853399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.612 ms 00:21:09.132 [2024-12-05 12:24:39.853408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.132 [2024-12-05 12:24:39.860416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.132 [2024-12-05 12:24:39.860536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:09.132 [2024-12-05 12:24:39.860556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.960 ms 00:21:09.132 [2024-12-05 12:24:39.860566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.132 [2024-12-05 12:24:39.884330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.132 [2024-12-05 12:24:39.884437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:09.132 [2024-12-05 12:24:39.884460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.680 ms 00:21:09.132 [2024-12-05 12:24:39.884484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.132 [2024-12-05 12:24:39.899684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.132 [2024-12-05 12:24:39.899793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:09.132 [2024-12-05 12:24:39.899846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.136 ms 00:21:09.132 [2024-12-05 12:24:39.899872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.132 [2024-12-05 12:24:39.900094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.132 [2024-12-05 12:24:39.900178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:09.132 [2024-12-05 12:24:39.900203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:21:09.132 [2024-12-05 12:24:39.900257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.132 [2024-12-05 12:24:39.923255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.132 [2024-12-05 12:24:39.923354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:09.132 [2024-12-05 12:24:39.923406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.948 ms 00:21:09.132 [2024-12-05 12:24:39.923427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.132 [2024-12-05 12:24:39.945973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.132 [2024-12-05 12:24:39.946069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:09.132 [2024-12-05 12:24:39.946122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.470 ms 00:21:09.132 [2024-12-05 12:24:39.946144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.132 [2024-12-05 12:24:39.968515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.132 [2024-12-05 12:24:39.968613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:09.132 [2024-12-05 12:24:39.968665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.307 ms 00:21:09.132 [2024-12-05 12:24:39.968687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.132 [2024-12-05 12:24:39.990549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.132 [2024-12-05 12:24:39.990645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:09.132 [2024-12-05 12:24:39.990695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.749 ms 00:21:09.132 [2024-12-05 12:24:39.990716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.132 [2024-12-05 12:24:39.990792] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:09.132 [2024-12-05 12:24:39.990822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.990856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.990885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.990916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.990990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991086] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.991987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.992016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.992082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.992115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.992144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.992174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 
[2024-12-05 12:24:39.992232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:09.132 [2024-12-05 12:24:39.992266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.992986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:21:09.133 [2024-12-05 12:24:39.993173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.993981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:09.133 [2024-12-05 12:24:39.994301] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:09.133 [2024-12-05 12:24:39.994313] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b202c494-44a1-46c8-8ff5-771af2981a3d 00:21:09.133 [2024-12-05 12:24:39.994321] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:09.133 [2024-12-05 12:24:39.994330] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:09.133 [2024-12-05 12:24:39.994337] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:09.133 [2024-12-05 12:24:39.994349] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:09.133 [2024-12-05 12:24:39.994356] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:09.133 [2024-12-05 12:24:39.994365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:21:09.133 [2024-12-05 12:24:39.994373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:09.133 [2024-12-05 12:24:39.994381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:09.133 [2024-12-05 12:24:39.994387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:09.133 [2024-12-05 12:24:39.994396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.133 [2024-12-05 12:24:39.994404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:09.133 [2024-12-05 12:24:39.994414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.607 ms 00:21:09.133 [2024-12-05 12:24:39.994422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.393 [2024-12-05 12:24:40.007102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.393 [2024-12-05 12:24:40.007133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:09.393 [2024-12-05 12:24:40.007146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.617 ms 00:21:09.393 [2024-12-05 12:24:40.007153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.393 [2024-12-05 12:24:40.007572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.393 [2024-12-05 12:24:40.007593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:09.393 [2024-12-05 12:24:40.007604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:21:09.393 [2024-12-05 12:24:40.007611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.393 [2024-12-05 12:24:40.053593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.393 [2024-12-05 12:24:40.053631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:09.393 [2024-12-05 12:24:40.053644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.393 [2024-12-05 12:24:40.053652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.393 [2024-12-05 12:24:40.053741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.393 [2024-12-05 12:24:40.053751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:09.393 [2024-12-05 12:24:40.053761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.393 [2024-12-05 12:24:40.053768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.393 [2024-12-05 12:24:40.053828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.393 [2024-12-05 12:24:40.053838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:09.393 [2024-12-05 12:24:40.053852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.393 [2024-12-05 12:24:40.053860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.393 [2024-12-05 12:24:40.053893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.393 [2024-12-05 12:24:40.053901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:09.393 [2024-12-05 12:24:40.053911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.393 [2024-12-05 12:24:40.053919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.393 [2024-12-05 12:24:40.138203] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.393 [2024-12-05 12:24:40.138249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:09.393 [2024-12-05 12:24:40.138262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.393 [2024-12-05 12:24:40.138270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.393 [2024-12-05 12:24:40.203708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.393 [2024-12-05 12:24:40.203754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:09.393 [2024-12-05 12:24:40.203767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.393 [2024-12-05 12:24:40.203775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.393 [2024-12-05 12:24:40.203875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.393 [2024-12-05 12:24:40.203885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:09.393 [2024-12-05 12:24:40.203898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.393 [2024-12-05 12:24:40.203907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.393 [2024-12-05 12:24:40.203957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.393 [2024-12-05 12:24:40.203965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:09.393 [2024-12-05 12:24:40.203975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.393 [2024-12-05 12:24:40.203982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.393 [2024-12-05 12:24:40.204094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.393 [2024-12-05 12:24:40.204103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:09.393 [2024-12-05 12:24:40.204113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.393 [2024-12-05 12:24:40.204122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.393 [2024-12-05 12:24:40.204177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.394 [2024-12-05 12:24:40.204187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:09.394 [2024-12-05 12:24:40.204197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.394 [2024-12-05 12:24:40.204204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.394 [2024-12-05 12:24:40.204258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.394 [2024-12-05 12:24:40.204267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:09.394 [2024-12-05 12:24:40.204278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.394 [2024-12-05 12:24:40.204285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.394 [2024-12-05 12:24:40.204355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.394 [2024-12-05 12:24:40.204365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:09.394 [2024-12-05 12:24:40.204374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.394 [2024-12-05 12:24:40.204382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:21:09.394 [2024-12-05 12:24:40.204613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 358.486 ms, result 0
00:21:09.394 true
00:21:09.394 12:24:40 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76724
00:21:09.394 12:24:40 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76724 ']'
00:21:09.394 12:24:40 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76724
00:21:09.394 12:24:40 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:21:09.394 12:24:40 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:09.394 12:24:40 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76724
killing process with pid 76724
00:21:09.394 12:24:40 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:09.394 12:24:40 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:09.394 12:24:40 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76724'
00:21:09.394 12:24:40 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76724
00:21:09.394 12:24:40 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76724
00:21:15.955 12:24:46 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:21:16.528 65536+0 records in
00:21:16.528 65536+0 records out
00:21:16.528 268435456 bytes (268 MB, 256 MiB) copied, 1.08667 s, 247 MB/s
00:21:16.528 12:24:47 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-12-05 12:24:47.379813] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
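The killprocess trace above shows the harness's teardown steps: validate the pid, probe it with kill -0, read the command name with ps, echo a diagnostic, signal, and wait. A self-contained sketch of that flow, modeled on the traced commands rather than copied from autotest_common.sh:

#!/usr/bin/env bash
# Sketch of a killprocess-style helper following the steps in the trace above.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1               # '[' -z "$pid" ']'
    kill -0 "$pid" 2>/dev/null || return 0  # process already gone
    local process_name=""
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # The trace compares the name against "sudo"; for a reactor_0 process
    # that test is false and the plain kill below is the branch taken.
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null   # wait succeeds because the harness spawned the pid itself
}

The dd step that follows is also easy to sanity-check: 65536 blocks of 4 KiB is 65536 * 4096 = 268435456 bytes, exactly the 256 MiB dd reports, before spdk_dd replays that random pattern onto the ftl0 bdev described by ftl.json.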
00:21:16.528 [2024-12-05 12:24:47.380887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76906 ] 00:21:16.788 [2024-12-05 12:24:47.551015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.047 [2024-12-05 12:24:47.657683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.047 [2024-12-05 12:24:47.891184] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:17.047 [2024-12-05 12:24:47.891245] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:17.308 [2024-12-05 12:24:48.048107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.308 [2024-12-05 12:24:48.048263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:17.308 [2024-12-05 12:24:48.048280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:17.308 [2024-12-05 12:24:48.048288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.308 [2024-12-05 12:24:48.050534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.308 [2024-12-05 12:24:48.050564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:17.308 [2024-12-05 12:24:48.050572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.228 ms 00:21:17.308 [2024-12-05 12:24:48.050578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.308 [2024-12-05 12:24:48.050647] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:17.308 [2024-12-05 12:24:48.051202] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:17.308 [2024-12-05 12:24:48.051219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.308 [2024-12-05 12:24:48.051225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:17.308 [2024-12-05 12:24:48.051232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:21:17.308 [2024-12-05 12:24:48.051238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.308 [2024-12-05 12:24:48.052627] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:17.308 [2024-12-05 12:24:48.063431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.308 [2024-12-05 12:24:48.063551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:17.308 [2024-12-05 12:24:48.063659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.805 ms 00:21:17.308 [2024-12-05 12:24:48.063669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.308 [2024-12-05 12:24:48.063740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.308 [2024-12-05 12:24:48.063750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:17.308 [2024-12-05 12:24:48.063757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:17.308 [2024-12-05 12:24:48.063763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.308 [2024-12-05 12:24:48.070190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:17.308 [2024-12-05 12:24:48.070217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:17.308 [2024-12-05 12:24:48.070225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.392 ms 00:21:17.308 [2024-12-05 12:24:48.070231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.308 [2024-12-05 12:24:48.070305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.308 [2024-12-05 12:24:48.070314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:17.308 [2024-12-05 12:24:48.070321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:17.308 [2024-12-05 12:24:48.070327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.308 [2024-12-05 12:24:48.070346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.308 [2024-12-05 12:24:48.070353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:17.308 [2024-12-05 12:24:48.070359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:17.308 [2024-12-05 12:24:48.070365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.308 [2024-12-05 12:24:48.070384] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:17.308 [2024-12-05 12:24:48.073435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.308 [2024-12-05 12:24:48.073472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:17.308 [2024-12-05 12:24:48.073481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.056 ms 00:21:17.308 [2024-12-05 12:24:48.073488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.308 [2024-12-05 12:24:48.073520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.308 [2024-12-05 12:24:48.073526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:17.308 [2024-12-05 12:24:48.073533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:17.308 [2024-12-05 12:24:48.073539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.308 [2024-12-05 12:24:48.073556] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:17.308 [2024-12-05 12:24:48.073572] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:17.308 [2024-12-05 12:24:48.073600] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:17.308 [2024-12-05 12:24:48.073614] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:17.308 [2024-12-05 12:24:48.073696] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:17.308 [2024-12-05 12:24:48.073705] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:17.308 [2024-12-05 12:24:48.073714] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:17.308 [2024-12-05 12:24:48.073725] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:17.308 [2024-12-05 12:24:48.073732] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:17.308 [2024-12-05 12:24:48.073739] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:17.308 [2024-12-05 12:24:48.073745] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:17.308 [2024-12-05 12:24:48.073751] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:17.308 [2024-12-05 12:24:48.073758] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:17.308 [2024-12-05 12:24:48.073764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.308 [2024-12-05 12:24:48.073769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:17.308 [2024-12-05 12:24:48.073775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:21:17.308 [2024-12-05 12:24:48.073781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.308 [2024-12-05 12:24:48.073857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.308 [2024-12-05 12:24:48.073866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:17.308 [2024-12-05 12:24:48.073873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:17.308 [2024-12-05 12:24:48.073878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.308 [2024-12-05 12:24:48.073957] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:17.308 [2024-12-05 12:24:48.073966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:17.308 [2024-12-05 12:24:48.073972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:17.308 [2024-12-05 12:24:48.073979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.308 [2024-12-05 12:24:48.073985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:17.308 [2024-12-05 12:24:48.073991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:17.308 [2024-12-05 12:24:48.073996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:17.308 [2024-12-05 12:24:48.074001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:17.308 [2024-12-05 12:24:48.074006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:17.308 [2024-12-05 12:24:48.074011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:17.308 [2024-12-05 12:24:48.074016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:17.308 [2024-12-05 12:24:48.074026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:17.308 [2024-12-05 12:24:48.074032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:17.308 [2024-12-05 12:24:48.074037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:17.308 [2024-12-05 12:24:48.074042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:17.308 [2024-12-05 12:24:48.074047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.308 [2024-12-05 12:24:48.074051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:17.308 [2024-12-05 12:24:48.074057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:17.308 [2024-12-05 12:24:48.074063] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.308 [2024-12-05 12:24:48.074068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:17.308 [2024-12-05 12:24:48.074074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:17.308 [2024-12-05 12:24:48.074079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.308 [2024-12-05 12:24:48.074084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:17.308 [2024-12-05 12:24:48.074089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:17.308 [2024-12-05 12:24:48.074094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.308 [2024-12-05 12:24:48.074099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:17.308 [2024-12-05 12:24:48.074104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:17.308 [2024-12-05 12:24:48.074109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.308 [2024-12-05 12:24:48.074115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:17.308 [2024-12-05 12:24:48.074120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:17.308 [2024-12-05 12:24:48.074125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.308 [2024-12-05 12:24:48.074130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:17.308 [2024-12-05 12:24:48.074136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:17.308 [2024-12-05 12:24:48.074141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:17.308 [2024-12-05 12:24:48.074146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:17.308 [2024-12-05 12:24:48.074151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:17.308 [2024-12-05 12:24:48.074156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:17.309 [2024-12-05 12:24:48.074161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:17.309 [2024-12-05 12:24:48.074167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:17.309 [2024-12-05 12:24:48.074172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.309 [2024-12-05 12:24:48.074178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:17.309 [2024-12-05 12:24:48.074183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:17.309 [2024-12-05 12:24:48.074188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.309 [2024-12-05 12:24:48.074193] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:17.309 [2024-12-05 12:24:48.074198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:17.309 [2024-12-05 12:24:48.074206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:17.309 [2024-12-05 12:24:48.074212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.309 [2024-12-05 12:24:48.074218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:17.309 [2024-12-05 12:24:48.074223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:17.309 [2024-12-05 12:24:48.074230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:17.309 
[2024-12-05 12:24:48.074236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:17.309 [2024-12-05 12:24:48.074241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:17.309 [2024-12-05 12:24:48.074246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:17.309 [2024-12-05 12:24:48.074252] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:17.309 [2024-12-05 12:24:48.074259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:17.309 [2024-12-05 12:24:48.074266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:17.309 [2024-12-05 12:24:48.074271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:17.309 [2024-12-05 12:24:48.074276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:17.309 [2024-12-05 12:24:48.074282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:17.309 [2024-12-05 12:24:48.074288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:17.309 [2024-12-05 12:24:48.074293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:17.309 [2024-12-05 12:24:48.074298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:17.309 [2024-12-05 12:24:48.074304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:17.309 [2024-12-05 12:24:48.074309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:17.309 [2024-12-05 12:24:48.074316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:17.309 [2024-12-05 12:24:48.074322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:17.309 [2024-12-05 12:24:48.074327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:17.309 [2024-12-05 12:24:48.074332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:17.309 [2024-12-05 12:24:48.074338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:17.309 [2024-12-05 12:24:48.074344] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:17.309 [2024-12-05 12:24:48.074350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:17.309 [2024-12-05 12:24:48.074357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:17.309 [2024-12-05 12:24:48.074363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:17.309 [2024-12-05 12:24:48.074368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:17.309 [2024-12-05 12:24:48.074375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:17.309 [2024-12-05 12:24:48.074381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.309 [2024-12-05 12:24:48.074389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:17.309 [2024-12-05 12:24:48.074395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:21:17.309 [2024-12-05 12:24:48.074401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.309 [2024-12-05 12:24:48.098845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.309 [2024-12-05 12:24:48.098967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:17.309 [2024-12-05 12:24:48.098980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.391 ms 00:21:17.309 [2024-12-05 12:24:48.098987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.309 [2024-12-05 12:24:48.099085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.309 [2024-12-05 12:24:48.099093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:17.309 [2024-12-05 12:24:48.099099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:17.309 [2024-12-05 12:24:48.099109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.309 [2024-12-05 12:24:48.143004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.309 [2024-12-05 12:24:48.143037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:17.309 [2024-12-05 12:24:48.143049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.877 ms 00:21:17.309 [2024-12-05 12:24:48.143056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.309 [2024-12-05 12:24:48.143131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.309 [2024-12-05 12:24:48.143140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:17.309 [2024-12-05 12:24:48.143147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:17.309 [2024-12-05 12:24:48.143158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.309 [2024-12-05 12:24:48.143579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.309 [2024-12-05 12:24:48.143595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:17.309 [2024-12-05 12:24:48.143607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:21:17.309 [2024-12-05 12:24:48.143613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.309 [2024-12-05 12:24:48.143727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.309 [2024-12-05 12:24:48.143764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:17.309 [2024-12-05 12:24:48.143773] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:21:17.309 [2024-12-05 12:24:48.143779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.309 [2024-12-05 12:24:48.156055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.309 [2024-12-05 12:24:48.156083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:17.309 [2024-12-05 12:24:48.156091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.258 ms 00:21:17.309 [2024-12-05 12:24:48.156098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.309 [2024-12-05 12:24:48.166640] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:17.309 [2024-12-05 12:24:48.166753] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:17.309 [2024-12-05 12:24:48.166766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.309 [2024-12-05 12:24:48.166773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:17.309 [2024-12-05 12:24:48.166781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.590 ms 00:21:17.309 [2024-12-05 12:24:48.166787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.568 [2024-12-05 12:24:48.185509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.568 [2024-12-05 12:24:48.185603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:17.568 [2024-12-05 12:24:48.185617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.667 ms 00:21:17.568 [2024-12-05 12:24:48.185624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.568 [2024-12-05 12:24:48.194909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.568 [2024-12-05 12:24:48.194936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:17.568 [2024-12-05 12:24:48.194944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.229 ms 00:21:17.568 [2024-12-05 12:24:48.194949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.568 [2024-12-05 12:24:48.204116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.569 [2024-12-05 12:24:48.204142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:17.569 [2024-12-05 12:24:48.204151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.123 ms 00:21:17.569 [2024-12-05 12:24:48.204157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.569 [2024-12-05 12:24:48.204639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.569 [2024-12-05 12:24:48.204684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:17.569 [2024-12-05 12:24:48.204693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:21:17.569 [2024-12-05 12:24:48.204700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.569 [2024-12-05 12:24:48.253404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.569 [2024-12-05 12:24:48.253442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:17.569 [2024-12-05 12:24:48.253454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.684 ms 00:21:17.569 [2024-12-05 12:24:48.253473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.569 [2024-12-05 12:24:48.261564] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:17.569 [2024-12-05 12:24:48.276524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.569 [2024-12-05 12:24:48.276555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:17.569 [2024-12-05 12:24:48.276566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.969 ms 00:21:17.569 [2024-12-05 12:24:48.276573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.569 [2024-12-05 12:24:48.276645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.569 [2024-12-05 12:24:48.276655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:17.569 [2024-12-05 12:24:48.276663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:17.569 [2024-12-05 12:24:48.276669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.569 [2024-12-05 12:24:48.276710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.569 [2024-12-05 12:24:48.276718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:17.569 [2024-12-05 12:24:48.276724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:17.569 [2024-12-05 12:24:48.276730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.569 [2024-12-05 12:24:48.276760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.569 [2024-12-05 12:24:48.276770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:17.569 [2024-12-05 12:24:48.276777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:17.569 [2024-12-05 12:24:48.276783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.569 [2024-12-05 12:24:48.276811] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:17.569 [2024-12-05 12:24:48.276819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.569 [2024-12-05 12:24:48.276825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:17.569 [2024-12-05 12:24:48.276832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:17.569 [2024-12-05 12:24:48.276838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.569 [2024-12-05 12:24:48.295652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.569 [2024-12-05 12:24:48.295682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:17.569 [2024-12-05 12:24:48.295691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.795 ms 00:21:17.569 [2024-12-05 12:24:48.295698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.569 [2024-12-05 12:24:48.295778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.569 [2024-12-05 12:24:48.295787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:17.569 [2024-12-05 12:24:48.295795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:17.569 [2024-12-05 12:24:48.295801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
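The L2P numbers through this startup are consistent with each other: the layout dump reported 23592960 L2P entries with a 4-byte address size, which is exactly the 90.00 MiB l2p region, while the resident-size message above caps the cached portion of that table at 59 of 60 MiB. A quick shell check of the arithmetic (the 4 KiB logical block size is an assumption, not printed in this log):

#!/usr/bin/env bash
# Cross-check the L2P sizing reported during this startup.
entries=23592960   # "L2P entries" from the layout dump
addr=4             # "L2P address size" in bytes
blk=4096           # assumed FTL logical block size (4 KiB)

echo "L2P table size:  $(( entries * addr / 1024 / 1024 )) MiB"        # 90 MiB
echo "Mapped capacity: $(( entries * blk / 1024 / 1024 / 1024 )) GiB"  # 90 GiB

The 90 GiB of mapped space against the 102400 MiB data_btm region leaves roughly 10 GiB unmapped, presumably the spare area the relocator and in-flight bands consume.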
00:21:17.569 [2024-12-05 12:24:48.296593] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:17.569 [2024-12-05 12:24:48.298896] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 248.215 ms, result 0
00:21:17.569 [2024-12-05 12:24:48.299697] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:17.569 [2024-12-05 12:24:48.314531] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:18.504 [2024-12-05T12:24:50.756Z] Copying: 20/256 [MB] (20 MBps)
[2024-12-05T12:24:51.329Z] Copying: 41/256 [MB] (20 MBps)
[2024-12-05T12:24:52.709Z] Copying: 60/256 [MB] (19 MBps)
[2024-12-05T12:24:53.646Z] Copying: 82/256 [MB] (21 MBps)
[2024-12-05T12:24:54.591Z] Copying: 104/256 [MB] (21 MBps)
[2024-12-05T12:24:55.531Z] Copying: 115/256 [MB] (11 MBps)
[2024-12-05T12:24:56.462Z] Copying: 127/256 [MB] (12 MBps)
[2024-12-05T12:24:57.401Z] Copying: 142/256 [MB] (14 MBps)
[2024-12-05T12:24:58.341Z] Copying: 154/256 [MB] (12 MBps)
[2024-12-05T12:24:59.723Z] Copying: 166/256 [MB] (11 MBps)
[2024-12-05T12:25:00.664Z] Copying: 179/256 [MB] (13 MBps)
[2024-12-05T12:25:01.598Z] Copying: 191/256 [MB] (11 MBps)
[2024-12-05T12:25:02.529Z] Copying: 205/256 [MB] (14 MBps)
[2024-12-05T12:25:03.461Z] Copying: 226/256 [MB] (20 MBps)
[2024-12-05T12:25:04.394Z] Copying: 240/256 [MB] (14 MBps)
[2024-12-05T12:25:04.394Z] Copying: 255/256 [MB] (14 MBps)
[2024-12-05T12:25:04.394Z] Copying: 256/256 [MB] (average 15 MBps)
[2024-12-05 12:25:04.354507] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:33.525 [2024-12-05 12:25:04.362032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.525 [2024-12-05 12:25:04.362065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:33.525 [2024-12-05 12:25:04.362077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:21:33.525 [2024-12-05 12:25:04.362087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.525 [2024-12-05 12:25:04.362105] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:21:33.525 [2024-12-05 12:25:04.364325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.525 [2024-12-05 12:25:04.364350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:33.525 [2024-12-05 12:25:04.364359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.209 ms
00:21:33.525 [2024-12-05 12:25:04.364366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.525 [2024-12-05 12:25:04.366603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.525 [2024-12-05 12:25:04.366631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:21:33.525 [2024-12-05 12:25:04.366639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.219 ms
00:21:33.525 [2024-12-05 12:25:04.366645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.525 [2024-12-05 12:25:04.373355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.525 [2024-12-05 12:25:04.373387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:21:33.525 [2024-12-05 12:25:04.373394]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.696 ms
00:21:33.525 [2024-12-05 12:25:04.373401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.525 [2024-12-05 12:25:04.378655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.525 [2024-12-05 12:25:04.378678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:21:33.525 [2024-12-05 12:25:04.378685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.219 ms
00:21:33.525 [2024-12-05 12:25:04.378691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.784 [2024-12-05 12:25:04.397201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.784 [2024-12-05 12:25:04.397230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:21:33.784 [2024-12-05 12:25:04.397240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.476 ms
00:21:33.784 [2024-12-05 12:25:04.397246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.784 [2024-12-05 12:25:04.409660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.784 [2024-12-05 12:25:04.409690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:21:33.784 [2024-12-05 12:25:04.409701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.386 ms
00:21:33.784 [2024-12-05 12:25:04.409708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.784 [2024-12-05 12:25:04.409805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.784 [2024-12-05 12:25:04.409813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:21:33.784 [2024-12-05 12:25:04.409819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms
00:21:33.784 [2024-12-05 12:25:04.409833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.784 [2024-12-05 12:25:04.428370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.784 [2024-12-05 12:25:04.428396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:21:33.784 [2024-12-05 12:25:04.428404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.525 ms
00:21:33.784 [2024-12-05 12:25:04.428409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.784 [2024-12-05 12:25:04.446675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.784 [2024-12-05 12:25:04.446700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:21:33.784 [2024-12-05 12:25:04.446707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.238 ms
00:21:33.784 [2024-12-05 12:25:04.446713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.784 [2024-12-05 12:25:04.464198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:33.784 [2024-12-05 12:25:04.464223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:21:33.784 [2024-12-05 12:25:04.464231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.458 ms
00:21:33.784 [2024-12-05 12:25:04.464236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:33.784 [2024-12-05 12:25:04.481713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[FTL][ftl0] name: Set FTL clean state 00:21:33.784 [2024-12-05 12:25:04.481746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.417 ms 00:21:33.784 [2024-12-05 12:25:04.481752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.784 [2024-12-05 12:25:04.481778] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:33.784 [2024-12-05 12:25:04.481791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:33.784 [2024-12-05 12:25:04.481799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:33.784 [2024-12-05 12:25:04.481806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:33.784 [2024-12-05 12:25:04.481812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:33.784 [2024-12-05 12:25:04.481818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:33.784 [2024-12-05 12:25:04.481823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:33.784 [2024-12-05 12:25:04.481830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:33.784 [2024-12-05 12:25:04.481836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:33.784 [2024-12-05 12:25:04.481842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:33.784 [2024-12-05 12:25:04.481848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:33.784 [2024-12-05 12:25:04.481854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:33.784 [2024-12-05 12:25:04.481860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:33.784 [2024-12-05 12:25:04.481865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:33.784 [2024-12-05 12:25:04.481871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 
12:25:04.481922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.481998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:21:33.785 [2024-12-05 12:25:04.482065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:33.785 [2024-12-05 12:25:04.482394] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:33.785 [2024-12-05 12:25:04.482401] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b202c494-44a1-46c8-8ff5-771af2981a3d 00:21:33.785 [2024-12-05 12:25:04.482408] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:33.785 [2024-12-05 12:25:04.482414] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:33.786 [2024-12-05 12:25:04.482419] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:33.786 [2024-12-05 12:25:04.482425] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:33.786 [2024-12-05 12:25:04.482430] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:33.786 [2024-12-05 12:25:04.482437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:33.786 [2024-12-05 12:25:04.482443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:33.786 [2024-12-05 12:25:04.482448] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:33.786 [2024-12-05 12:25:04.482453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:33.786 [2024-12-05 12:25:04.482458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.786 [2024-12-05 12:25:04.482476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:33.786 [2024-12-05 12:25:04.482483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:21:33.786 [2024-12-05 12:25:04.482489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.786 [2024-12-05 12:25:04.492491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.786 [2024-12-05 12:25:04.492515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:33.786 [2024-12-05 12:25:04.492523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.989 ms 00:21:33.786 [2024-12-05 12:25:04.492529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.786 [2024-12-05 12:25:04.492829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.786 [2024-12-05 12:25:04.492837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:33.786 [2024-12-05 12:25:04.492844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:21:33.786 [2024-12-05 12:25:04.492849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.786 [2024-12-05 12:25:04.522048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.786 [2024-12-05 12:25:04.522076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:33.786 [2024-12-05 12:25:04.522084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.786 [2024-12-05 12:25:04.522091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.786 
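For reference, the statistics block dumped above by ftl_dev_dump_stats reports the write amplification factor (WAF) as the ratio of media writes to user writes; with total writes = 960 and user writes = 0 the ratio is undefined, so the dump prints "WAF: inf". A minimal sketch of recovering those fields from a saved copy of this console output — the helper and the ftl.log filename are assumptions for illustration, not part of the test:

  # Hypothetical helper: compute WAF from a saved copy of this log (ftl.log).
  total=$(grep -oP 'total writes:\s*\K[0-9]+' ftl.log | tail -n1)
  user=$(grep -oP 'user writes:\s*\K[0-9]+' ftl.log | tail -n1)
  if [ "${user:-0}" -eq 0 ]; then
      echo 'WAF: inf'    # division by zero, matching the dump above
  else
      awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.2f\n", t / u }'
  fi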
[2024-12-05 12:25:04.522155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.786 [2024-12-05 12:25:04.522162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:33.786 [2024-12-05 12:25:04.522168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.786 [2024-12-05 12:25:04.522174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.786 [2024-12-05 12:25:04.522208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.786 [2024-12-05 12:25:04.522216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:33.786 [2024-12-05 12:25:04.522222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.786 [2024-12-05 12:25:04.522228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.786 [2024-12-05 12:25:04.522242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.786 [2024-12-05 12:25:04.522252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:33.786 [2024-12-05 12:25:04.522258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.786 [2024-12-05 12:25:04.522264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.786 [2024-12-05 12:25:04.586067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.786 [2024-12-05 12:25:04.586102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:33.786 [2024-12-05 12:25:04.586112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.786 [2024-12-05 12:25:04.586118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.786 [2024-12-05 12:25:04.637908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.786 [2024-12-05 12:25:04.637946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:33.786 [2024-12-05 12:25:04.637956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.786 [2024-12-05 12:25:04.637963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.786 [2024-12-05 12:25:04.638016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.786 [2024-12-05 12:25:04.638024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:33.786 [2024-12-05 12:25:04.638031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.786 [2024-12-05 12:25:04.638037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.786 [2024-12-05 12:25:04.638062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.786 [2024-12-05 12:25:04.638070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:33.786 [2024-12-05 12:25:04.638080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.786 [2024-12-05 12:25:04.638087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.786 [2024-12-05 12:25:04.638166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.786 [2024-12-05 12:25:04.638175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:33.786 [2024-12-05 12:25:04.638182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.786 [2024-12-05 12:25:04.638189] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.786 [2024-12-05 12:25:04.638216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.786 [2024-12-05 12:25:04.638223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:33.786 [2024-12-05 12:25:04.638230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.786 [2024-12-05 12:25:04.638238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.786 [2024-12-05 12:25:04.638275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.786 [2024-12-05 12:25:04.638282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:33.786 [2024-12-05 12:25:04.638288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.786 [2024-12-05 12:25:04.638294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.786 [2024-12-05 12:25:04.638337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:33.786 [2024-12-05 12:25:04.638345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:33.786 [2024-12-05 12:25:04.638354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:33.786 [2024-12-05 12:25:04.638360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.786 [2024-12-05 12:25:04.638496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.453 ms, result 0 00:21:34.729 00:21:34.729 00:21:34.729 12:25:05 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=77097 00:21:34.729 12:25:05 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 77097 00:21:34.729 12:25:05 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77097 ']' 00:21:34.729 12:25:05 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.729 12:25:05 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.729 12:25:05 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:34.729 12:25:05 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.729 12:25:05 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.729 12:25:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:34.729 [2024-12-05 12:25:05.409063] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
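The shell trace above is ftl/trim.sh starting a fresh SPDK target for the trim test: spdk_tgt is launched with the ftl_init log flag, its PID is saved as svcpid, and waitforlisten blocks until the RPC server answers on /var/tmp/spdk.sock. A minimal standalone equivalent, sketched from the traced variables (rpc_addr, max_retries) — the polling loop is an assumption; the real helper lives in common/autotest_common.sh:

  # Sketch: launch the target and poll the RPC socket until it answers.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  rpc_addr=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do    # max_retries=100, as in the trace
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" spdk_get_version >/dev/null 2>&1; then
          break                # target is up and listening
      fi
      sleep 0.5
  done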
00:21:34.729 [2024-12-05 12:25:05.409231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77097 ] 00:21:34.729 [2024-12-05 12:25:05.567864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.009 [2024-12-05 12:25:05.660920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.573 12:25:06 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.573 12:25:06 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:35.573 12:25:06 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:35.832 [2024-12-05 12:25:06.442356] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:35.832 [2024-12-05 12:25:06.442416] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:35.832 [2024-12-05 12:25:06.614624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.833 [2024-12-05 12:25:06.614665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:35.833 [2024-12-05 12:25:06.614679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:35.833 [2024-12-05 12:25:06.614686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.833 [2024-12-05 12:25:06.616882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.833 [2024-12-05 12:25:06.616913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.833 [2024-12-05 12:25:06.616922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.180 ms 00:21:35.833 [2024-12-05 12:25:06.616928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.833 [2024-12-05 12:25:06.616993] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:35.833 [2024-12-05 12:25:06.617576] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:35.833 [2024-12-05 12:25:06.617602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.833 [2024-12-05 12:25:06.617609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.833 [2024-12-05 12:25:06.617617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:21:35.833 [2024-12-05 12:25:06.617623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.833 [2024-12-05 12:25:06.619179] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:35.833 [2024-12-05 12:25:06.629450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.833 [2024-12-05 12:25:06.629496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:35.833 [2024-12-05 12:25:06.629507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.276 ms 00:21:35.833 [2024-12-05 12:25:06.629515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.833 [2024-12-05 12:25:06.629587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.833 [2024-12-05 12:25:06.629598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:35.833 [2024-12-05 12:25:06.629605] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:35.833 [2024-12-05 12:25:06.629612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.833 [2024-12-05 12:25:06.635856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.833 [2024-12-05 12:25:06.635887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.833 [2024-12-05 12:25:06.635895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.207 ms 00:21:35.833 [2024-12-05 12:25:06.635902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.833 [2024-12-05 12:25:06.635984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.833 [2024-12-05 12:25:06.635993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.833 [2024-12-05 12:25:06.636000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:35.833 [2024-12-05 12:25:06.636009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.833 [2024-12-05 12:25:06.636029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.833 [2024-12-05 12:25:06.636037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:35.833 [2024-12-05 12:25:06.636043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:35.833 [2024-12-05 12:25:06.636050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.833 [2024-12-05 12:25:06.636069] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:35.833 [2024-12-05 12:25:06.639186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.833 [2024-12-05 12:25:06.639210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.833 [2024-12-05 12:25:06.639220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.121 ms 00:21:35.833 [2024-12-05 12:25:06.639226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.833 [2024-12-05 12:25:06.639259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.833 [2024-12-05 12:25:06.639265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:35.833 [2024-12-05 12:25:06.639273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:35.833 [2024-12-05 12:25:06.639280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.833 [2024-12-05 12:25:06.639298] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:35.833 [2024-12-05 12:25:06.639314] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:35.833 [2024-12-05 12:25:06.639348] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:35.833 [2024-12-05 12:25:06.639360] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:35.833 [2024-12-05 12:25:06.639445] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:35.833 [2024-12-05 12:25:06.639454] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:35.833 [2024-12-05 12:25:06.639480] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:35.833 [2024-12-05 12:25:06.639488] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:35.833 [2024-12-05 12:25:06.639497] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:35.833 [2024-12-05 12:25:06.639505] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:35.833 [2024-12-05 12:25:06.639512] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:35.833 [2024-12-05 12:25:06.639519] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:35.833 [2024-12-05 12:25:06.639528] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:35.833 [2024-12-05 12:25:06.639534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.833 [2024-12-05 12:25:06.639541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:35.833 [2024-12-05 12:25:06.639547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:21:35.833 [2024-12-05 12:25:06.639554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.833 [2024-12-05 12:25:06.639632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.833 [2024-12-05 12:25:06.639642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:35.833 [2024-12-05 12:25:06.639648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:35.833 [2024-12-05 12:25:06.639655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.833 [2024-12-05 12:25:06.639733] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:35.833 [2024-12-05 12:25:06.639750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:35.833 [2024-12-05 12:25:06.639757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.833 [2024-12-05 12:25:06.639765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.833 [2024-12-05 12:25:06.639771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:35.833 [2024-12-05 12:25:06.639778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:35.833 [2024-12-05 12:25:06.639784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:35.833 [2024-12-05 12:25:06.639795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:35.833 [2024-12-05 12:25:06.639802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:35.833 [2024-12-05 12:25:06.639809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.833 [2024-12-05 12:25:06.639814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:35.833 [2024-12-05 12:25:06.639822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:35.833 [2024-12-05 12:25:06.639828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.833 [2024-12-05 12:25:06.639835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:35.833 [2024-12-05 12:25:06.639840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:35.833 [2024-12-05 12:25:06.639847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.833 
[2024-12-05 12:25:06.639852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:35.833 [2024-12-05 12:25:06.639859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:35.833 [2024-12-05 12:25:06.639869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.833 [2024-12-05 12:25:06.639876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:35.833 [2024-12-05 12:25:06.639881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:35.833 [2024-12-05 12:25:06.639888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.833 [2024-12-05 12:25:06.639893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:35.833 [2024-12-05 12:25:06.639901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:35.833 [2024-12-05 12:25:06.639906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.833 [2024-12-05 12:25:06.639913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:35.833 [2024-12-05 12:25:06.639918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:35.833 [2024-12-05 12:25:06.639925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.833 [2024-12-05 12:25:06.639931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:35.833 [2024-12-05 12:25:06.639937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:35.833 [2024-12-05 12:25:06.639942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.833 [2024-12-05 12:25:06.639950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:35.833 [2024-12-05 12:25:06.639955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:35.833 [2024-12-05 12:25:06.639963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.833 [2024-12-05 12:25:06.639968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:35.833 [2024-12-05 12:25:06.639975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:35.833 [2024-12-05 12:25:06.639981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.833 [2024-12-05 12:25:06.639988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:35.833 [2024-12-05 12:25:06.639993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:35.833 [2024-12-05 12:25:06.640002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.833 [2024-12-05 12:25:06.640007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:35.833 [2024-12-05 12:25:06.640015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:35.833 [2024-12-05 12:25:06.640021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.833 [2024-12-05 12:25:06.640028] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:35.833 [2024-12-05 12:25:06.640036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:35.833 [2024-12-05 12:25:06.640043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.833 [2024-12-05 12:25:06.640049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.833 [2024-12-05 12:25:06.640057] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:35.833 [2024-12-05 12:25:06.640063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:35.833 [2024-12-05 12:25:06.640070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:35.833 [2024-12-05 12:25:06.640075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:35.833 [2024-12-05 12:25:06.640082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:35.833 [2024-12-05 12:25:06.640087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:35.833 [2024-12-05 12:25:06.640095] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:35.833 [2024-12-05 12:25:06.640102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.833 [2024-12-05 12:25:06.640112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:35.833 [2024-12-05 12:25:06.640118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:35.833 [2024-12-05 12:25:06.640126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:35.833 [2024-12-05 12:25:06.640132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:35.833 [2024-12-05 12:25:06.640138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:35.833 [2024-12-05 12:25:06.640144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:35.833 [2024-12-05 12:25:06.640151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:35.833 [2024-12-05 12:25:06.640156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:35.833 [2024-12-05 12:25:06.640168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:35.833 [2024-12-05 12:25:06.640174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:35.834 [2024-12-05 12:25:06.640181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:35.834 [2024-12-05 12:25:06.640186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:35.834 [2024-12-05 12:25:06.640193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:35.834 [2024-12-05 12:25:06.640200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:35.834 [2024-12-05 12:25:06.640207] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:35.834 [2024-12-05 
12:25:06.640213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.834 [2024-12-05 12:25:06.640222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:35.834 [2024-12-05 12:25:06.640228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:35.834 [2024-12-05 12:25:06.640235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:35.834 [2024-12-05 12:25:06.640241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:35.834 [2024-12-05 12:25:06.640248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.834 [2024-12-05 12:25:06.640254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:35.834 [2024-12-05 12:25:06.640261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:21:35.834 [2024-12-05 12:25:06.640268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.834 [2024-12-05 12:25:06.664505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.834 [2024-12-05 12:25:06.664533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.834 [2024-12-05 12:25:06.664544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.181 ms 00:21:35.834 [2024-12-05 12:25:06.664553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.834 [2024-12-05 12:25:06.664648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.834 [2024-12-05 12:25:06.664658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:35.834 [2024-12-05 12:25:06.664667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:35.834 [2024-12-05 12:25:06.664674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.834 [2024-12-05 12:25:06.691045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.834 [2024-12-05 12:25:06.691074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:35.834 [2024-12-05 12:25:06.691084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.352 ms 00:21:35.834 [2024-12-05 12:25:06.691091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.834 [2024-12-05 12:25:06.691139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.834 [2024-12-05 12:25:06.691146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:35.834 [2024-12-05 12:25:06.691155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:35.834 [2024-12-05 12:25:06.691161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.834 [2024-12-05 12:25:06.691564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.834 [2024-12-05 12:25:06.691585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:35.834 [2024-12-05 12:25:06.691596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:21:35.834 [2024-12-05 12:25:06.691603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:35.834 [2024-12-05 12:25:06.691719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.834 [2024-12-05 12:25:06.691726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:35.834 [2024-12-05 12:25:06.691734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:35.834 [2024-12-05 12:25:06.691740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.092 [2024-12-05 12:25:06.705213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.092 [2024-12-05 12:25:06.705240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:36.092 [2024-12-05 12:25:06.705250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.453 ms 00:21:36.092 [2024-12-05 12:25:06.705256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.092 [2024-12-05 12:25:06.725735] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:36.092 [2024-12-05 12:25:06.725779] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:36.092 [2024-12-05 12:25:06.725795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.092 [2024-12-05 12:25:06.725805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:36.092 [2024-12-05 12:25:06.725816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.458 ms 00:21:36.092 [2024-12-05 12:25:06.725830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.092 [2024-12-05 12:25:06.746097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.092 [2024-12-05 12:25:06.746126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:36.092 [2024-12-05 12:25:06.746137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.204 ms 00:21:36.092 [2024-12-05 12:25:06.746144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.092 [2024-12-05 12:25:06.755111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.092 [2024-12-05 12:25:06.755138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:36.093 [2024-12-05 12:25:06.755150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.908 ms 00:21:36.093 [2024-12-05 12:25:06.755157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.093 [2024-12-05 12:25:06.763548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.093 [2024-12-05 12:25:06.763574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:36.093 [2024-12-05 12:25:06.763583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.348 ms 00:21:36.093 [2024-12-05 12:25:06.763589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.093 [2024-12-05 12:25:06.764059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.093 [2024-12-05 12:25:06.764077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:36.093 [2024-12-05 12:25:06.764086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:21:36.093 [2024-12-05 12:25:06.764092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.093 [2024-12-05 
12:25:06.812317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.093 [2024-12-05 12:25:06.812355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:36.093 [2024-12-05 12:25:06.812367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.205 ms 00:21:36.093 [2024-12-05 12:25:06.812374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.093 [2024-12-05 12:25:06.820594] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:36.093 [2024-12-05 12:25:06.835208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.093 [2024-12-05 12:25:06.835245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:36.093 [2024-12-05 12:25:06.835256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.772 ms 00:21:36.093 [2024-12-05 12:25:06.835264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.093 [2024-12-05 12:25:06.835346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.093 [2024-12-05 12:25:06.835356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:36.093 [2024-12-05 12:25:06.835363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:36.093 [2024-12-05 12:25:06.835372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.093 [2024-12-05 12:25:06.835420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.093 [2024-12-05 12:25:06.835429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:36.093 [2024-12-05 12:25:06.835435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:36.093 [2024-12-05 12:25:06.835445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.093 [2024-12-05 12:25:06.835479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.093 [2024-12-05 12:25:06.835488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:36.093 [2024-12-05 12:25:06.835495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:36.093 [2024-12-05 12:25:06.835505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.093 [2024-12-05 12:25:06.835603] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:36.093 [2024-12-05 12:25:06.835615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.093 [2024-12-05 12:25:06.835624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:36.093 [2024-12-05 12:25:06.835632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:36.093 [2024-12-05 12:25:06.835638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.093 [2024-12-05 12:25:06.854141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.093 [2024-12-05 12:25:06.854170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:36.093 [2024-12-05 12:25:06.854181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.483 ms 00:21:36.093 [2024-12-05 12:25:06.854188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.093 [2024-12-05 12:25:06.854265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.093 [2024-12-05 12:25:06.854273] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:36.093 [2024-12-05 12:25:06.854282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:36.093 [2024-12-05 12:25:06.854290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.093 [2024-12-05 12:25:06.855160] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:36.093 [2024-12-05 12:25:06.857540] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 240.280 ms, result 0 00:21:36.093 [2024-12-05 12:25:06.858680] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:36.093 Some configs were skipped because the RPC state that can call them passed over. 00:21:36.093 12:25:06 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:36.351 true 00:21:36.351 [2024-12-05 12:25:07.084015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.351 [2024-12-05 12:25:07.084057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:36.351 [2024-12-05 12:25:07.084067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.619 ms 00:21:36.351 [2024-12-05 12:25:07.084076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.351 [2024-12-05 12:25:07.084104] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.710 ms, result 0 00:21:36.351 12:25:07 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:36.609 [2024-12-05 12:25:07.283568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.609 [2024-12-05 12:25:07.283602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:36.609 [2024-12-05 12:25:07.283612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.003 ms 00:21:36.609 [2024-12-05 12:25:07.283618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.609 [2024-12-05 12:25:07.283645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.079 ms, result 0 00:21:36.609 true 00:21:36.609 12:25:07 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 77097 00:21:36.609 12:25:07 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77097 ']' 00:21:36.609 12:25:07 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77097 00:21:36.609 12:25:07 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:36.609 12:25:07 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.609 12:25:07 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77097 00:21:36.609 12:25:07 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:36.609 killing process with pid 77097 00:21:36.609 12:25:07 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:36.609 12:25:07 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77097' 00:21:36.609 12:25:07 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77097 00:21:36.609 12:25:07 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77097 00:21:37.176 [2024-12-05 12:25:07.885813] 
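The two bdev_ftl_unmap calls above trim 1024 blocks from each end of the device: the first starts at LBA 0, and the second starts at LBA 23591936, which is the L2P entry count reported during startup (23592960) minus the 1024 blocks being trimmed. Each call returns true and appears in the FTL log as a 'Process trim' management step before killprocess tears the target down. The same sequence, condensed — svcpid is the PID saved when the target was started, and the kill/wait pair is a sketch of what the killprocess helper does, not its actual implementation:

  # Sketch: the trims issued by the test, then target teardown.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024          # head of the device
  "$rpc" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # tail: 23592960 - 1024
  kill "$svcpid"    # send the default SIGTERM, as traced above
  wait "$svcpid"    # reap the process and collect its exit status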
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.176 [2024-12-05 12:25:07.885868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:37.176 [2024-12-05 12:25:07.885881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:37.176 [2024-12-05 12:25:07.885889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.176 [2024-12-05 12:25:07.885910] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:37.176 [2024-12-05 12:25:07.888044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.176 [2024-12-05 12:25:07.888072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:37.176 [2024-12-05 12:25:07.888085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.119 ms 00:21:37.176 [2024-12-05 12:25:07.888091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.176 [2024-12-05 12:25:07.888348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.176 [2024-12-05 12:25:07.888364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:37.176 [2024-12-05 12:25:07.888373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:21:37.176 [2024-12-05 12:25:07.888379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.176 [2024-12-05 12:25:07.891706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.176 [2024-12-05 12:25:07.891734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:37.176 [2024-12-05 12:25:07.891746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.310 ms 00:21:37.176 [2024-12-05 12:25:07.891752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.176 [2024-12-05 12:25:07.896993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.176 [2024-12-05 12:25:07.897021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:37.176 [2024-12-05 12:25:07.897030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.208 ms 00:21:37.176 [2024-12-05 12:25:07.897036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.176 [2024-12-05 12:25:07.904809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.176 [2024-12-05 12:25:07.904843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:37.176 [2024-12-05 12:25:07.904854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.705 ms 00:21:37.176 [2024-12-05 12:25:07.904860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.176 [2024-12-05 12:25:07.912077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.176 [2024-12-05 12:25:07.912108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:37.176 [2024-12-05 12:25:07.912119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.184 ms 00:21:37.176 [2024-12-05 12:25:07.912125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.176 [2024-12-05 12:25:07.912236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.176 [2024-12-05 12:25:07.912244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:37.176 [2024-12-05 12:25:07.912252] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:37.176 [2024-12-05 12:25:07.912259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.176 [2024-12-05 12:25:07.920204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.176 [2024-12-05 12:25:07.920230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:37.176 [2024-12-05 12:25:07.920239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.927 ms 00:21:37.176 [2024-12-05 12:25:07.920245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.176 [2024-12-05 12:25:07.927662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.176 [2024-12-05 12:25:07.927688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:37.176 [2024-12-05 12:25:07.927698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.386 ms 00:21:37.176 [2024-12-05 12:25:07.927704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.176 [2024-12-05 12:25:07.934708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.176 [2024-12-05 12:25:07.934733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:37.176 [2024-12-05 12:25:07.934742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.973 ms 00:21:37.176 [2024-12-05 12:25:07.934747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.176 [2024-12-05 12:25:07.942005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.176 [2024-12-05 12:25:07.942031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:37.176 [2024-12-05 12:25:07.942039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.203 ms 00:21:37.176 [2024-12-05 12:25:07.942045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.176 [2024-12-05 12:25:07.942074] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:37.176 [2024-12-05 12:25:07.942085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:37.176 [2024-12-05 12:25:07.942095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:37.176 [2024-12-05 12:25:07.942101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:37.176 [2024-12-05 12:25:07.942109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:37.176 [2024-12-05 12:25:07.942115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:37.176 [2024-12-05 12:25:07.942123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:37.176 [2024-12-05 12:25:07.942129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:37.176 [2024-12-05 12:25:07.942136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:37.176 [2024-12-05 12:25:07.942142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:37.176 [2024-12-05 12:25:07.942149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:37.176 [2024-12-05 12:25:07.942155] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:37.176 [2024-12-05 12:25:07.942162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 
[2024-12-05 12:25:07.942321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:21:37.177 [2024-12-05 12:25:07.942505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:37.177 [2024-12-05 12:25:07.942663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
00:21:37.178 [2024-12-05 12:25:07.942863-942885] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 0.792 ms, status: 0)
00:21:37.178 [2024-12-05 12:25:07.953006-953051] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 10.102 ms, status: 0)
00:21:37.178 [2024-12-05 12:25:07.953375-953410] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.288 ms, status: 0)
00:21:37.178 [2024-12-05 12:25:07.990574-990620] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc (duration: 0.000 ms, status: 0)
00:21:37.178 [2024-12-05 12:25:07.990700-990723] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata (duration: 0.000 ms, status: 0)
00:21:37.178 [2024-12-05 12:25:07.990762-990784] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map (duration: 0.000 ms, status: 0)
00:21:37.178 [2024-12-05 12:25:07.990801-990823] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map (duration: 0.000 ms, status: 0)
00:21:37.436 [2024-12-05 12:25:08.053522-053587] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache (duration: 0.000 ms, status: 0)
00:21:37.436 [2024-12-05 12:25:08.104969-105028] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata (duration: 0.000 ms, status: 0)
00:21:37.436 [2024-12-05 12:25:08.105106-105130] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel (duration: 0.000 ms, status: 0)
00:21:37.436 [2024-12-05 12:25:08.105159-105188] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands (duration: 0.000 ms, status: 0)
00:21:37.436 [2024-12-05 12:25:08.105269-105293] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools (duration: 0.000 ms, status: 0)
00:21:37.436 [2024-12-05 12:25:08.105323-105344] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock (duration: 0.000 ms, status: 0)
00:21:37.436 [2024-12-05 12:25:08.105384-105406] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev (duration: 0.000 ms, status: 0)
00:21:37.436 [2024-12-05 12:25:08.105453-105492] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev (duration: 0.000 ms, status: 0)
00:21:37.436 [2024-12-05 12:25:08.105622] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 219.785 ms, result 0
00:21:38.002 12:25:08 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:21:38.002 12:25:08 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
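Those two ftl.ftl_trim lines are the harness reading data back out of the FTL device into a flat file with spdk_dd, SPDK's dd(1) analogue that can use a bdev as either end of a copy. Every flag is visible in the log: --ib names the input bdev (ftl0), --of the output file, --json the config that recreates the bdev stack inside spdk_dd, and --count the number of I/O units; 65536 units producing the 256 MB tracked in the copy progress below works out to 4 KiB per unit, which matches FTL's block size. A hand-runnable restatement of the same invocation (paths copied from the log; only illustrative, the bdev must not be in use elsewhere):

    # Re-run the read-back the test performs above.
    # 65536 I/O units * 4 KiB = 256 MiB, matching the copy progress below.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_dd" \
        --ib=ftl0 \
        --of="$SPDK/test/ftl/data" \
        --count=65536 \
        --json="$SPDK/test/ftl/config/ftl.json"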
00:21:38.002 [2024-12-05 12:25:08.742232] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
[2024-12-05 12:25:08.742356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77145 ]
00:21:38.260 [2024-12-05 12:25:08.899845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:38.260 [2024-12-05 12:25:09.003266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:38.518 [2024-12-05 12:25:09.236241] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:38.518 [2024-12-05 12:25:09.236305] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:38.781 [2024-12-05 12:25:09.392893-392950] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration (duration: 0.004 ms, status: 0)
00:21:38.781 [2024-12-05 12:25:09.395160-395201] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev (duration: 2.197 ms, status: 0)
00:21:38.781 [2024-12-05 12:25:09.395265] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:21:38.781 [2024-12-05 12:25:09.395844] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:21:38.781 [2024-12-05 12:25:09.395862-395883] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev (duration: 0.603 ms, status: 0)
00:21:38.781 [2024-12-05 12:25:09.397261] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:21:38.781 [2024-12-05 12:25:09.407690-407730] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block (duration: 10.430 ms, status: 0)
00:21:38.781 [2024-12-05 12:25:09.407807-407830] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block (duration: 0.023 ms, status: 0)
00:21:38.781 [2024-12-05 12:25:09.414138-414173] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools (duration: 6.275 ms, status: 0)
00:21:38.781 [2024-12-05 12:25:09.414248-414269] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands (duration: 0.046 ms, status: 0)
00:21:38.781 [2024-12-05 12:25:09.414288-414308] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device (duration: 0.006 ms, status: 0)
00:21:38.781 [2024-12-05 12:25:09.414328] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:21:38.781 [2024-12-05 12:25:09.417270-417304] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel (duration: 2.947 ms, status: 0)
00:21:38.781 [2024-12-05 12:25:09.417334-417353] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands (duration: 0.011 ms, status: 0)
00:21:38.781 [2024-12-05 12:25:09.417371] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:21:38.781 [2024-12-05 12:25:09.417387] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:21:38.781 [2024-12-05 12:25:09.417416] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:21:38.781 [2024-12-05 12:25:09.417429] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:21:38.781 [2024-12-05 12:25:09.417521] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:21:38.781 [2024-12-05 12:25:09.417531] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:21:38.781 [2024-12-05 12:25:09.417540] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:21:38.781 [2024-12-05 12:25:09.417551] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:21:38.781 [2024-12-05 12:25:09.417558] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:21:38.781 [2024-12-05 12:25:09.417565] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:21:38.781 [2024-12-05 12:25:09.417570] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:21:38.781 [2024-12-05 12:25:09.417576] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:21:38.781 [2024-12-05 12:25:09.417582] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:21:38.781 [2024-12-05 12:25:09.417588-417605] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.220 ms, status: 0)
00:21:38.781 [2024-12-05 12:25:09.417672-417696] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.053 ms, status: 0)
00:21:38.781 [2024-12-05 12:25:09.417776-418021] ftl_layout.c: 768:ftl_layout_dump, 130-133:dump_region: *NOTICE*: [FTL][ftl0] NV cache layout (region: offset / blocks, MiB):
    sb                 0.00 /   0.12
    l2p                0.12 /  90.00
    band_md           90.12 /   0.50
    band_md_mirror    90.62 /   0.50
    nvc_md           123.88 /   0.12
    nvc_md_mirror    124.00 /   0.12
    p2l0              91.12 /   8.00
    p2l1              99.12 /   8.00
    p2l2             107.12 /   8.00
    p2l3             115.12 /   8.00
    trim_md          123.12 /   0.25
    trim_md_mirror   123.38 /   0.25
    trim_log         123.62 /   0.12
    trim_log_mirror  123.75 /   0.12
00:21:38.782 [2024-12-05 12:25:09.418026-418080] ftl_layout.c: 775:ftl_layout_dump, 130-133:dump_region: *NOTICE*: [FTL][ftl0] Base device layout (region: offset / blocks, MiB):
    sb_mirror          0.00 /      0.12
    vmap          102400.25 /      3.38
    data_btm           0.25 / 102400.00
00:21:38.782 [2024-12-05 12:25:09.418087-418170] upgrade/ftl_sb_v5.c: 408-416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0        ver:5 blk_offs:0x0     blk_sz:0x20
    Region type:0x2        ver:0 blk_offs:0x20    blk_sz:0x5a00
    Region type:0x3        ver:2 blk_offs:0x5a20  blk_sz:0x80
    Region type:0x4        ver:2 blk_offs:0x5aa0  blk_sz:0x80
    Region type:0xa        ver:2 blk_offs:0x5b20  blk_sz:0x800
    Region type:0xb        ver:2 blk_offs:0x6320  blk_sz:0x800
    Region type:0xc        ver:2 blk_offs:0x6b20  blk_sz:0x800
    Region type:0xd        ver:2 blk_offs:0x7320  blk_sz:0x800
    Region type:0xe        ver:0 blk_offs:0x7b20  blk_sz:0x40
    Region type:0xf        ver:0 blk_offs:0x7b60  blk_sz:0x40
    Region type:0x10       ver:1 blk_offs:0x7ba0  blk_sz:0x20
    Region type:0x11       ver:1 blk_offs:0x7bc0  blk_sz:0x20
    Region type:0x6        ver:2 blk_offs:0x7be0  blk_sz:0x20
    Region type:0x7        ver:2 blk_offs:0x7c00  blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7c20  blk_sz:0x13b6e0
00:21:38.782 [2024-12-05 12:25:09.418176-418205] upgrade/ftl_sb_v5.c: 422-430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1        ver:5 blk_offs:0x0       blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20      blk_sz:0x20
    Region type:0x9        ver:0 blk_offs:0x40      blk_sz:0x1900000
    Region type:0x5        ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:21:38.782 [2024-12-05 12:25:09.418211-418230] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 0.488 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.442651-442693] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata (duration: 24.358 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.442791-442811] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses (duration: 0.051 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.483505-483553] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache (duration: 40.676 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.483617-483640] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map (duration: 0.003 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.484047-484086] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map (duration: 0.391 ms, status: 0)
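A useful property of the two SB metadata tables above: within each device the regions tile the block space, so every row's blk_offs equals the previous row's blk_offs plus blk_sz, and the last row ends at the device size. For the nvc table, 0x7c20 + 0x13b6e0 = 0x143300 blocks, and 0x143300 blocks of 4 KiB is exactly the 5171.00 MiB NV cache capacity reported earlier. A sketch of that check with GNU awk (regions.txt is a hypothetical capture of the table rows above, one row per line):

    # Contiguity check for an FTL SB layout dump; needs gawk's
    # --non-decimal-data so the 0x... fields parse as hex.
    awk --non-decimal-data '
        {
            for (i = 1; i <= NF; i++) {
                if ($i ~ /^blk_offs:/) { off = substr($i, 10) + 0 }
                if ($i ~ /^blk_sz:/)   { sz  = substr($i, 8)  + 0 }
            }
            if (NR > 1 && off != next_off)
                printf "gap before 0x%x\n", off
            next_off = off + sz
        }
        END { printf "total: 0x%x blocks = %.2f MiB (4 KiB blocks)\n",
                     next_off, next_off * 4096 / 1048576 }
    ' regions.txt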
00:21:38.782 [2024-12-05 12:25:09.484207-484230] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata (duration: 0.098 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.496639-496676] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc (duration: 12.373 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.507175] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:21:38.782 [2024-12-05 12:25:09.507199] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:21:38.782 [2024-12-05 12:25:09.507209-507229] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata (duration: 10.454 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.525852-525892] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata (duration: 18.564 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.535372-535407] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata (duration: 9.418 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.544144-544178] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata (duration: 8.695 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.544661-544693] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing (duration: 0.417 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.592527-592579] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints (duration: 47.816 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.600573] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:21:38.782 [2024-12-05 12:25:09.615308-615354] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P (duration: 22.650 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.615425-615447] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P (duration: 0.013 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.615506-615533] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization (duration: 0.029 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.615558-615578] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller (duration: 0.009 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.615608] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:21:38.782 [2024-12-05 12:25:09.615617-615635] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup (duration: 0.009 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.634831-634871] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state (duration: 19.176 ms, status: 0)
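Every management step in this log is bracketed by the same four trace_step notices (Action or Rollback, name, duration, status), which makes raw captures easy to mine. A small sketch that ranks steps by duration, assuming an unprocessed one-entry-per-line capture in a hypothetical ftl.log:

    # Pair each "name: <step>" entry with the following "duration: <ms>"
    # entry and print the slowest steps first.
    awk '/428:trace_step/  { sub(/.*name: /, "");     name = $0 }
         /430:trace_step/  { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                             print $0 " ms  " name }' ftl.log |
        sort -rn | head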
00:21:38.782 [2024-12-05 12:25:09.634949-634975] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization (duration: 0.035 ms, status: 0)
00:21:38.782 [2024-12-05 12:25:09.635834] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:38.782 [2024-12-05 12:25:09.638166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 242.683 ms, result 0
00:21:38.782 [2024-12-05 12:25:09.639314] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:39.042 [2024-12-05 12:25:09.650130] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:39.983 [2024-12-05T12:25:11.791Z] Copying: 14/256 [MB] (14 MBps)
[2024-12-05T12:25:12.726Z] Copying: 25/256 [MB] (11 MBps)
[2024-12-05T12:25:13.666Z] Copying: 40/256 [MB] (14 MBps)
[2024-12-05T12:25:15.046Z] Copying: 54/256 [MB] (13 MBps)
[2024-12-05T12:25:15.985Z] Copying: 73/256 [MB] (19 MBps)
[2024-12-05T12:25:16.920Z] Copying: 85/256 [MB] (12 MBps)
[2024-12-05T12:25:17.860Z] Copying: 100/256 [MB] (15 MBps)
[2024-12-05T12:25:18.799Z] Copying: 114/256 [MB] (13 MBps)
[2024-12-05T12:25:19.743Z] Copying: 127/256 [MB] (13 MBps)
[2024-12-05T12:25:20.686Z] Copying: 146/256 [MB] (18 MBps)
[2024-12-05T12:25:22.064Z] Copying: 160/256 [MB] (14 MBps)
[2024-12-05T12:25:22.996Z] Copying: 179/256 [MB] (18 MBps)
[2024-12-05T12:25:23.927Z] Copying: 195/256 [MB] (15 MBps)
[2024-12-05T12:25:24.865Z] Copying: 210/256 [MB] (15 MBps)
[2024-12-05T12:25:25.802Z] Copying: 222/256 [MB] (11 MBps)
[2024-12-05T12:25:26.740Z] Copying: 238/256 [MB] (15 MBps)
[2024-12-05T12:25:27.306Z] Copying: 249/256 [MB] (10 MBps)
[2024-12-05T12:25:27.307Z] Copying: 256/256 [MB] (average 14 MBps)
00:21:56.438 [2024-12-05 12:25:27.104201] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:56.438 [2024-12-05 12:25:27.111849-111903] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel (duration: 0.003 ms, status: 0)
00:21:56.438 [2024-12-05 12:25:27.111921] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:21:56.438 [2024-12-05 12:25:27.114180-114336] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device (duration: 2.249 ms, status: 0)
00:21:56.438 [2024-12-05 12:25:27.114553-114575] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller (duration: 0.196 ms, status: 0)
00:21:56.438 [2024-12-05 12:25:27.117377-117489] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P (duration: 2.787 ms, status: 0)
00:21:56.438 [2024-12-05 12:25:27.122737-122773] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims (duration: 5.232 ms, status: 0)
00:21:56.438 [2024-12-05 12:25:27.141209-141338] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata (duration: 18.392 ms, status: 0)
00:21:56.438 [2024-12-05 12:25:27.153136-153195] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata (duration: 11.771 ms, status: 0)
00:21:56.438 [2024-12-05 12:25:27.153290-153318] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata (duration: 0.066 ms, status: 0)
00:21:56.438 [2024-12-05 12:25:27.170866-170903] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata (duration: 17.536 ms, status: 0)
00:21:56.438 [2024-12-05 12:25:27.188727-188770] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata (duration: 17.789 ms, status: 0)
00:21:56.438 [2024-12-05 12:25:27.206219-206255] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock (duration: 17.422 ms, status: 0)
00:21:56.438 [2024-12-05 12:25:27.223325-223361] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state (duration: 17.023 ms, status: 0)
00:21:56.438 [2024-12-05 12:25:27.223388] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:21:56.438 [2024-12-05 12:25:27.223399-224017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free (100 identical per-band entries, condensed)
00:21:56.439 [2024-12-05 12:25:27.224030] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:21:56.439 [2024-12-05 12:25:27.224039] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b202c494-44a1-46c8-8ff5-771af2981a3d
00:21:56.439 [2024-12-05 12:25:27.224045] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:21:56.439 [2024-12-05 12:25:27.224051] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:21:56.439 [2024-12-05 12:25:27.224057] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:21:56.439 [2024-12-05 12:25:27.224063] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:21:56.439 [2024-12-05 12:25:27.224068] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:21:56.439 [2024-12-05 12:25:27.224077] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:21:56.439 [2024-12-05 12:25:27.224082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:21:56.439 [2024-12-05 12:25:27.224087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:21:56.439 [2024-12-05 12:25:27.224091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:21:56.439 [2024-12-05 12:25:27.224097-224115] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 0.710 ms, status: 0)
00:21:56.439 [2024-12-05 12:25:27.234120-234160] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 9.991 ms, status: 0)
00:21:56.439 [2024-12-05 12:25:27.234455-234492] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.274 ms, status: 0)
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:56.439 [2024-12-05 12:25:27.263835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.439 [2024-12-05 12:25:27.263842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.439 [2024-12-05 12:25:27.263905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.439 [2024-12-05 12:25:27.263912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:56.439 [2024-12-05 12:25:27.263918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.439 [2024-12-05 12:25:27.263924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.439 [2024-12-05 12:25:27.263957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.439 [2024-12-05 12:25:27.263965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:56.439 [2024-12-05 12:25:27.263971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.439 [2024-12-05 12:25:27.263977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.439 [2024-12-05 12:25:27.263994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.439 [2024-12-05 12:25:27.264002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:56.439 [2024-12-05 12:25:27.264008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.439 [2024-12-05 12:25:27.264014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.698 [2024-12-05 12:25:27.327532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.698 [2024-12-05 12:25:27.327566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:56.698 [2024-12-05 12:25:27.327576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.698 [2024-12-05 12:25:27.327586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.698 [2024-12-05 12:25:27.379083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.698 [2024-12-05 12:25:27.379119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:56.698 [2024-12-05 12:25:27.379128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.698 [2024-12-05 12:25:27.379134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.698 [2024-12-05 12:25:27.379180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.698 [2024-12-05 12:25:27.379187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:56.698 [2024-12-05 12:25:27.379194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.698 [2024-12-05 12:25:27.379200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.698 [2024-12-05 12:25:27.379229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.698 [2024-12-05 12:25:27.379236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:56.698 [2024-12-05 12:25:27.379242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.698 [2024-12-05 12:25:27.379249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.698 [2024-12-05 12:25:27.379324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:21:56.698 [2024-12-05 12:25:27.379332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:21:56.698 [2024-12-05 12:25:27.379339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:56.698 [2024-12-05 12:25:27.379345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:56.698 [2024-12-05 12:25:27.379372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:56.698 [2024-12-05 12:25:27.379382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:21:56.698 [2024-12-05 12:25:27.379389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:56.698 [2024-12-05 12:25:27.379395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:56.698 [2024-12-05 12:25:27.379431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:56.698 [2024-12-05 12:25:27.379438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:56.698 [2024-12-05 12:25:27.379445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:56.698 [2024-12-05 12:25:27.379451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:56.698 [2024-12-05 12:25:27.379510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:56.698 [2024-12-05 12:25:27.379519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:56.698 [2024-12-05 12:25:27.379526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:56.698 [2024-12-05 12:25:27.379532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:56.698 [2024-12-05 12:25:27.379662] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 267.791 ms, result 0
00:21:57.267
00:21:57.267
00:21:57.267 12:25:27 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:21:57.267 12:25:27 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:21:57.839 12:25:28 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:57.839 [2024-12-05 12:25:28.581395] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
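The two trim.sh commands above are the verification half of the trim test: cmp --bytes=4194304 succeeds only if the first 4 MiB of the dumped data file compare equal to /dev/zero, i.e. the trimmed range reads back as all zero bytes, and md5sum records a checksum of the file; spdk_dd then copies the random_pattern file back onto ftl0. A minimal Python sketch of the same zero-check, assuming a local copy of the data file (the "data" path here is hypothetical):

    import hashlib

    DATA = "data"                 # hypothetical local copy of test/ftl/data
    NBYTES = 4 * 1024 * 1024      # matches --bytes=4194304 (4 MiB)

    with open(DATA, "rb") as f:
        window = f.read(NBYTES)

    # cmp data /dev/zero over 4 MiB: every byte in the window must be 0x00
    assert len(window) == NBYTES and window == bytes(NBYTES), "trimmed range not zeroed"

    # md5sum data: checksum of the whole file, as the test records it
    with open(DATA, "rb") as f:
        print(hashlib.md5(f.read()).hexdigest())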
00:21:57.839 [2024-12-05 12:25:28.581673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77355 ]
00:21:58.097 [2024-12-05 12:25:28.735695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:58.097 [2024-12-05 12:25:28.831199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:58.356 [2024-12-05 12:25:29.063557] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:58.356 [2024-12-05 12:25:29.063753] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:58.356 [2024-12-05 12:25:29.219819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.356 [2024-12-05 12:25:29.219960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:21:58.356 [2024-12-05 12:25:29.220017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:21:58.356 [2024-12-05 12:25:29.220037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.356 [2024-12-05 12:25:29.222288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.356 [2024-12-05 12:25:29.222389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:58.356 [2024-12-05 12:25:29.222437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.222 ms
00:21:58.356 [2024-12-05 12:25:29.222455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.356 [2024-12-05 12:25:29.222615] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:21:58.356 [2024-12-05 12:25:29.223185] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:21:58.356 [2024-12-05 12:25:29.223276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.356 [2024-12-05 12:25:29.223317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:58.356 [2024-12-05 12:25:29.223336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms
00:21:58.356 [2024-12-05 12:25:29.223351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.616 [2024-12-05 12:25:29.224684] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:21:58.616 [2024-12-05 12:25:29.234911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.616 [2024-12-05 12:25:29.235003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:21:58.616 [2024-12-05 12:25:29.235049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.229 ms
00:21:58.616 [2024-12-05 12:25:29.235066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.616 [2024-12-05 12:25:29.235142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.616 [2024-12-05 12:25:29.235165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:21:58.616 [2024-12-05 12:25:29.235181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
00:21:58.616 [2024-12-05 12:25:29.235197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
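Each management step in these traces is printed by mngt/ftl_mngt.c:trace_step as an Action entry followed by name, duration and status entries. A small log-analysis sketch that pairs each name: with the duration: that follows it and ranks the slowest steps; the console-log filename passed on the command line is an assumption:

    import re
    import sys

    # e.g. "... 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block"
    NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+)$")
    # e.g. "... 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.229 ms"
    DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms")

    steps = []
    pending = None
    with open(sys.argv[1]) as log:   # saved copy of this console log (hypothetical)
        for line in log:
            m = NAME_RE.search(line.rstrip())
            if m:
                pending = m.group(1)
                continue
            m = DUR_RE.search(line)
            if m and pending is not None:
                steps.append((float(m.group(1)), pending))
                pending = None

    for dur, name in sorted(steps, reverse=True)[:5]:
        print(f"{dur:8.3f} ms  {name}")

Run against this section it would surface steps such as Load super block (10.229 ms) just above.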
00:21:58.616 [2024-12-05 12:25:29.241418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.616 [2024-12-05 12:25:29.241522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:21:58.616 [2024-12-05 12:25:29.241568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.177 ms
00:21:58.616 [2024-12-05 12:25:29.241585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.616 [2024-12-05 12:25:29.241685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.616 [2024-12-05 12:25:29.241705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:21:58.616 [2024-12-05 12:25:29.241720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms
00:21:58.616 [2024-12-05 12:25:29.241735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.616 [2024-12-05 12:25:29.241767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.616 [2024-12-05 12:25:29.241950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:21:58.616 [2024-12-05 12:25:29.241976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:21:58.616 [2024-12-05 12:25:29.241992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.616 [2024-12-05 12:25:29.242056] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:21:58.616 [2024-12-05 12:25:29.244983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.616 [2024-12-05 12:25:29.245062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:21:58.616 [2024-12-05 12:25:29.245100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.933 ms
00:21:58.616 [2024-12-05 12:25:29.245117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.616 [2024-12-05 12:25:29.245178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.616 [2024-12-05 12:25:29.245197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:21:58.616 [2024-12-05 12:25:29.245213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms
00:21:58.616 [2024-12-05 12:25:29.245228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.616 [2024-12-05 12:25:29.245259] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:21:58.616 [2024-12-05 12:25:29.245286] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:21:58.616 [2024-12-05 12:25:29.245371] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:21:58.616 [2024-12-05 12:25:29.245403] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:21:58.616 [2024-12-05 12:25:29.245543] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:21:58.616 [2024-12-05 12:25:29.245667] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:21:58.616 [2024-12-05 12:25:29.245693] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:21:58.616 [2024-12-05 12:25:29.245723] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:21:58.616 [2024-12-05 12:25:29.245768] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:21:58.616 [2024-12-05 12:25:29.245792] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:21:58.616 [2024-12-05 12:25:29.245808] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:21:58.616 [2024-12-05 12:25:29.245823] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:21:58.616 [2024-12-05 12:25:29.245935] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:21:58.616 [2024-12-05 12:25:29.245956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.616 [2024-12-05 12:25:29.245971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:21:58.616 [2024-12-05 12:25:29.245986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms
00:21:58.616 [2024-12-05 12:25:29.246002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.616 [2024-12-05 12:25:29.246094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.616 [2024-12-05 12:25:29.246151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:21:58.616 [2024-12-05 12:25:29.246169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:21:58.616 [2024-12-05 12:25:29.246184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.616 [2024-12-05 12:25:29.246278] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:21:58.616 [2024-12-05 12:25:29.246339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:21:58.616 [2024-12-05 12:25:29.246356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:21:58.616 [2024-12-05 12:25:29.246392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:58.616 [2024-12-05 12:25:29.246411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:21:58.616 [2024-12-05 12:25:29.246425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:21:58.616 [2024-12-05 12:25:29.246439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:21:58.616 [2024-12-05 12:25:29.246489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:21:58.616 [2024-12-05 12:25:29.246508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:21:58.616 [2024-12-05 12:25:29.246522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:21:58.616 [2024-12-05 12:25:29.246537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:21:58.616 [2024-12-05 12:25:29.246557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:21:58.616 [2024-12-05 12:25:29.246647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:21:58.616 [2024-12-05 12:25:29.246664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:21:58.616 [2024-12-05 12:25:29.246679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:21:58.616 [2024-12-05 12:25:29.246693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:58.616 [2024-12-05 12:25:29.246708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:21:58.616 [2024-12-05 12:25:29.246722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:21:58.616 [2024-12-05 12:25:29.246736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:58.616 [2024-12-05 12:25:29.246751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:21:58.616 [2024-12-05 12:25:29.246765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:21:58.616 [2024-12-05 12:25:29.246779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:21:58.616 [2024-12-05 12:25:29.246793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:21:58.616 [2024-12-05 12:25:29.246808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:21:58.616 [2024-12-05 12:25:29.246822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:21:58.616 [2024-12-05 12:25:29.246836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:21:58.616 [2024-12-05 12:25:29.246888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:21:58.616 [2024-12-05 12:25:29.246907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:21:58.616 [2024-12-05 12:25:29.246921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:21:58.616 [2024-12-05 12:25:29.246934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:21:58.616 [2024-12-05 12:25:29.246949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:21:58.616 [2024-12-05 12:25:29.246963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:21:58.616 [2024-12-05 12:25:29.246977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:21:58.616 [2024-12-05 12:25:29.246991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:21:58.616 [2024-12-05 12:25:29.247005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:21:58.616 [2024-12-05 12:25:29.247020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:21:58.616 [2024-12-05 12:25:29.247034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:21:58.616 [2024-12-05 12:25:29.247047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:21:58.616 [2024-12-05 12:25:29.247062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:21:58.616 [2024-12-05 12:25:29.247075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:58.616 [2024-12-05 12:25:29.247118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:21:58.616 [2024-12-05 12:25:29.247136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:21:58.616 [2024-12-05 12:25:29.247151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:58.616 [2024-12-05 12:25:29.247166] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:21:58.616 [2024-12-05 12:25:29.247197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:21:58.616 [2024-12-05 12:25:29.247208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:21:58.616 [2024-12-05 12:25:29.247215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:21:58.616 [2024-12-05 12:25:29.247222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:21:58.616 [2024-12-05 12:25:29.247228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:21:58.616 [2024-12-05 12:25:29.247234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:21:58.617 [2024-12-05 12:25:29.247239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:21:58.617 [2024-12-05 12:25:29.247244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:21:58.617 [2024-12-05 12:25:29.247249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:21:58.617 [2024-12-05 12:25:29.247257] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:21:58.617 [2024-12-05 12:25:29.247265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:21:58.617 [2024-12-05 12:25:29.247272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:21:58.617 [2024-12-05 12:25:29.247277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:21:58.617 [2024-12-05 12:25:29.247283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:21:58.617 [2024-12-05 12:25:29.247289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:21:58.617 [2024-12-05 12:25:29.247295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:21:58.617 [2024-12-05 12:25:29.247301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:21:58.617 [2024-12-05 12:25:29.247307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:21:58.617 [2024-12-05 12:25:29.247312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:21:58.617 [2024-12-05 12:25:29.247318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:21:58.617 [2024-12-05 12:25:29.247323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:21:58.617 [2024-12-05 12:25:29.247329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:21:58.617 [2024-12-05 12:25:29.247334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:21:58.617 [2024-12-05 12:25:29.247339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:21:58.617 [2024-12-05 12:25:29.247346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:21:58.617 [2024-12-05 12:25:29.247352] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:21:58.617 [2024-12-05 12:25:29.247358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:21:58.617 [2024-12-05 12:25:29.247364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:21:58.617 [2024-12-05 12:25:29.247369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:21:58.617 [2024-12-05 12:25:29.247375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:21:58.617 [2024-12-05 12:25:29.247381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:21:58.617 [2024-12-05 12:25:29.247389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.617 [2024-12-05 12:25:29.247397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:21:58.617 [2024-12-05 12:25:29.247404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.164 ms
00:21:58.617 [2024-12-05 12:25:29.247410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.617 [2024-12-05 12:25:29.271757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.617 [2024-12-05 12:25:29.271851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:21:58.617 [2024-12-05 12:25:29.271891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.300 ms
00:21:58.617 [2024-12-05 12:25:29.271909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.617 [2024-12-05 12:25:29.272016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.617 [2024-12-05 12:25:29.272037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:21:58.617 [2024-12-05 12:25:29.272053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms
00:21:58.617 [2024-12-05 12:25:29.272067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.617 [2024-12-05 12:25:29.311569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.617 [2024-12-05 12:25:29.311679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:21:58.617 [2024-12-05 12:25:29.311728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.476 ms
00:21:58.617 [2024-12-05 12:25:29.311746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.617 [2024-12-05 12:25:29.311832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.617 [2024-12-05 12:25:29.311855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:21:58.617 [2024-12-05 12:25:29.311871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:21:58.617 [2024-12-05 12:25:29.311886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.617 [2024-12-05 12:25:29.312291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.617 [2024-12-05 12:25:29.312365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:21:58.617 [2024-12-05 12:25:29.312411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms
00:21:58.617 [2024-12-05 12:25:29.312428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.617 [2024-12-05 12:25:29.312564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.617 [2024-12-05 12:25:29.312585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:21:58.617 [2024-12-05 12:25:29.312601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms
00:21:58.617 [2024-12-05 12:25:29.312616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.617 [2024-12-05 12:25:29.324854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.617 [2024-12-05 12:25:29.324938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:21:58.617 [2024-12-05 12:25:29.324976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.213 ms
00:21:58.617 [2024-12-05 12:25:29.324994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.617 [2024-12-05 12:25:29.335812] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:21:58.617 [2024-12-05 12:25:29.335911] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:21:58.617 [2024-12-05 12:25:29.335957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.617 [2024-12-05 12:25:29.335974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:21:58.617 [2024-12-05 12:25:29.335989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.872 ms
00:21:58.617 [2024-12-05 12:25:29.336004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.617 [2024-12-05 12:25:29.354768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.617 [2024-12-05 12:25:29.354858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:21:58.617 [2024-12-05 12:25:29.354899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.640 ms
00:21:58.617 [2024-12-05 12:25:29.354917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.617 [2024-12-05 12:25:29.364034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.617 [2024-12-05 12:25:29.364117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:21:58.617 [2024-12-05 12:25:29.364155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.056 ms
00:21:58.617 [2024-12-05 12:25:29.364172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.617 [2024-12-05 12:25:29.373192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.617 [2024-12-05 12:25:29.373282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:21:58.617 [2024-12-05 12:25:29.373320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.973 ms
00:21:58.617 [2024-12-05 12:25:29.373337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.617 [2024-12-05 12:25:29.373819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.618 [2024-12-05 12:25:29.373854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:21:58.618 [2024-12-05 12:25:29.373907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms
00:21:58.618 [2024-12-05 12:25:29.373952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.618 [2024-12-05 12:25:29.422662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.618 [2024-12-05 12:25:29.422771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:21:58.618 [2024-12-05 12:25:29.422812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.677 ms
00:21:58.618 [2024-12-05 12:25:29.422830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.618 [2024-12-05 12:25:29.431157] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:21:58.618 [2024-12-05 12:25:29.445636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.618 [2024-12-05 12:25:29.445733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:21:58.618 [2024-12-05 12:25:29.445772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.725 ms
00:21:58.618 [2024-12-05 12:25:29.445796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.618 [2024-12-05 12:25:29.445878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.618 [2024-12-05 12:25:29.445953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:21:58.618 [2024-12-05 12:25:29.445976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:21:58.618 [2024-12-05 12:25:29.445992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.618 [2024-12-05 12:25:29.446073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.618 [2024-12-05 12:25:29.446308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:21:58.618 [2024-12-05 12:25:29.446386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:21:58.618 [2024-12-05 12:25:29.446415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.618 [2024-12-05 12:25:29.446500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.618 [2024-12-05 12:25:29.446560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:21:58.618 [2024-12-05 12:25:29.446601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:21:58.618 [2024-12-05 12:25:29.446618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.618 [2024-12-05 12:25:29.446662] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:21:58.618 [2024-12-05 12:25:29.446711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.618 [2024-12-05 12:25:29.446730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:21:58.618 [2024-12-05 12:25:29.446747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms
00:21:58.618 [2024-12-05 12:25:29.446763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.618 [2024-12-05 12:25:29.465525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.618 [2024-12-05 12:25:29.465617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:21:58.618 [2024-12-05 12:25:29.465659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.697 ms
00:21:58.618 [2024-12-05 12:25:29.465678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.618 [2024-12-05 12:25:29.466175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.618 [2024-12-05 12:25:29.466229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:21:58.618 [2024-12-05 12:25:29.466316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:21:58.618 [2024-12-05 12:25:29.466336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.618 [2024-12-05 12:25:29.467181] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:58.618 [2024-12-05 12:25:29.469653] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 247.112 ms, result 0
00:21:58.618 [2024-12-05 12:25:29.470601] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:58.618 [2024-12-05 12:25:29.481330] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:59.185  [2024-12-05T12:25:30.054Z] Copying: 4096/4096 [kB] (average 14 MBps)
[2024-12-05 12:25:29.756056] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:59.185 [2024-12-05 12:25:29.762705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.185 [2024-12-05 12:25:29.762804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:59.185 [2024-12-05 12:25:29.762820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:21:59.185 [2024-12-05 12:25:29.762827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.185 [2024-12-05 12:25:29.762845] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:21:59.185 [2024-12-05 12:25:29.765030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.185 [2024-12-05 12:25:29.765051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:59.185 [2024-12-05 12:25:29.765060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.175 ms
00:21:59.185 [2024-12-05 12:25:29.765067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.185 [2024-12-05 12:25:29.767658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.185 [2024-12-05 12:25:29.767684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:21:59.185 [2024-12-05 12:25:29.767692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.575 ms
00:21:59.185 [2024-12-05 12:25:29.767698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.185 [2024-12-05 12:25:29.771186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.185 [2024-12-05 12:25:29.771208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:21:59.185 [2024-12-05 12:25:29.771216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.472 ms
00:21:59.185 [2024-12-05 12:25:29.771222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.185 [2024-12-05 12:25:29.776520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.185 [2024-12-05 12:25:29.776541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:21:59.185 [2024-12-05 12:25:29.776549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.277 ms
00:21:59.185 [2024-12-05 12:25:29.776555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.185 [2024-12-05 12:25:29.793960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.185 [2024-12-05 12:25:29.794062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:21:59.185 [2024-12-05 12:25:29.794075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.362 ms
00:21:59.185 [2024-12-05 12:25:29.794081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.185 [2024-12-05 12:25:29.806524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.185 [2024-12-05 12:25:29.806552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:21:59.185 [2024-12-05 12:25:29.806561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.419 ms
00:21:59.185 [2024-12-05 12:25:29.806568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.185 [2024-12-05 12:25:29.806664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.185 [2024-12-05 12:25:29.806672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:21:59.185 [2024-12-05 12:25:29.806685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
00:21:59.185 [2024-12-05 12:25:29.806691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.185 [2024-12-05 12:25:29.824686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.185 [2024-12-05 12:25:29.824787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:21:59.185 [2024-12-05 12:25:29.824799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.983 ms
00:21:59.185 [2024-12-05 12:25:29.824805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.185 [2024-12-05 12:25:29.842604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.185 [2024-12-05 12:25:29.842626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:21:59.185 [2024-12-05 12:25:29.842634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.754 ms
00:21:59.185 [2024-12-05 12:25:29.842639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.186 [2024-12-05 12:25:29.859543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.186 [2024-12-05 12:25:29.859635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:21:59.186 [2024-12-05 12:25:29.859646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.877 ms
00:21:59.186 [2024-12-05 12:25:29.859652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.186 [2024-12-05 12:25:29.876937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.186 [2024-12-05 12:25:29.876960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:21:59.186 [2024-12-05 12:25:29.876967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.218 ms
00:21:59.186 [2024-12-05 12:25:29.876972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.186 [2024-12-05 12:25:29.876999] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:21:59.186 [2024-12-05 12:25:29.877011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:21:59.186 [2024-12-05 12:25:29.877380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:21:59.187 [2024-12-05 12:25:29.877627] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:21:59.187 [2024-12-05 12:25:29.877633] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b202c494-44a1-46c8-8ff5-771af2981a3d
00:21:59.187 [2024-12-05 12:25:29.877651] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:21:59.187 [2024-12-05 12:25:29.877657] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:21:59.187 [2024-12-05 12:25:29.877663] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:21:59.187 [2024-12-05 12:25:29.877669] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:21:59.187 [2024-12-05 12:25:29.877676] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:21:59.187 [2024-12-05 12:25:29.877682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:21:59.187 [2024-12-05 12:25:29.877689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:21:59.187 [2024-12-05 12:25:29.877694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:21:59.187 [2024-12-05 12:25:29.877699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:21:59.187 [2024-12-05 12:25:29.877705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.187 [2024-12-05 12:25:29.877711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:21:59.187 [2024-12-05 12:25:29.877720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms
00:21:59.187 [2024-12-05 12:25:29.877727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.187 [2024-12-05 12:25:29.887475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.187 [2024-12-05 12:25:29.887497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:21:59.187 [2024-12-05 12:25:29.887505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.735 ms
00:21:59.187 [2024-12-05 12:25:29.887511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.187 [2024-12-05 12:25:29.887809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.187 [2024-12-05 12:25:29.887822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:21:59.187 [2024-12-05 12:25:29.887828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms
00:21:59.187 [2024-12-05 12:25:29.887834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.187 [2024-12-05 12:25:29.916929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:59.187 [2024-12-05 12:25:29.916955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:21:59.187 [2024-12-05 12:25:29.916963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:59.187 [2024-12-05 12:25:29.916972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.187 [2024-12-05 12:25:29.917033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:59.187 [2024-12-05 12:25:29.917040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:21:59.187 [2024-12-05 12:25:29.917046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:59.187 [2024-12-05 12:25:29.917052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.187 [2024-12-05 12:25:29.917091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:59.187 [2024-12-05 12:25:29.917098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:21:59.188 [2024-12-05 12:25:29.917104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:59.188 [2024-12-05 12:25:29.917110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.188 [2024-12-05 12:25:29.917128] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.188 [2024-12-05 12:25:29.917134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:59.188 [2024-12-05 12:25:29.917141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.188 [2024-12-05 12:25:29.917147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.188 [2024-12-05 12:25:29.979088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.188 [2024-12-05 12:25:29.979219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:59.188 [2024-12-05 12:25:29.979234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.188 [2024-12-05 12:25:29.979246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.188 [2024-12-05 12:25:30.030661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.188 [2024-12-05 12:25:30.030696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:59.188 [2024-12-05 12:25:30.030706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.188 [2024-12-05 12:25:30.030713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.188 [2024-12-05 12:25:30.030761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.188 [2024-12-05 12:25:30.030769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:59.188 [2024-12-05 12:25:30.030776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.188 [2024-12-05 12:25:30.030783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.188 [2024-12-05 12:25:30.030809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.188 [2024-12-05 12:25:30.030820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:59.188 [2024-12-05 12:25:30.030827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.188 [2024-12-05 12:25:30.030834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.188 [2024-12-05 12:25:30.030909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.188 [2024-12-05 12:25:30.030918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:59.188 [2024-12-05 12:25:30.030925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.188 [2024-12-05 12:25:30.030931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.188 [2024-12-05 12:25:30.030959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.188 [2024-12-05 12:25:30.030967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:59.188 [2024-12-05 12:25:30.030976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.188 [2024-12-05 12:25:30.030983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.188 [2024-12-05 12:25:30.031020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.188 [2024-12-05 12:25:30.031027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:59.188 [2024-12-05 12:25:30.031034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.188 [2024-12-05 12:25:30.031041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:59.188 [2024-12-05 12:25:30.031081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.188 [2024-12-05 12:25:30.031091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:59.188 [2024-12-05 12:25:30.031097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.188 [2024-12-05 12:25:30.031104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.188 [2024-12-05 12:25:30.031234] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 268.510 ms, result 0 00:21:59.755 00:21:59.755 00:22:00.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.014 12:25:30 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=77380 00:22:00.014 12:25:30 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 77380 00:22:00.014 12:25:30 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:00.014 12:25:30 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77380 ']' 00:22:00.014 12:25:30 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.014 12:25:30 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.014 12:25:30 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.014 12:25:30 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.014 12:25:30 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:00.014 [2024-12-05 12:25:30.708827] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:22:00.014 [2024-12-05 12:25:30.709146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77380 ] 00:22:00.014 [2024-12-05 12:25:30.864800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.272 [2024-12-05 12:25:30.978856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.837 12:25:31 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.837 12:25:31 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:00.837 12:25:31 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:01.138 [2024-12-05 12:25:31.744563] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:01.138 [2024-12-05 12:25:31.744615] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:01.138 [2024-12-05 12:25:31.916855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.138 [2024-12-05 12:25:31.916888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:01.138 [2024-12-05 12:25:31.916901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:01.138 [2024-12-05 12:25:31.916908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.138 [2024-12-05 12:25:31.919075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.138 [2024-12-05 12:25:31.919099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:01.138 [2024-12-05 12:25:31.919109] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.152 ms 00:22:01.138 [2024-12-05 12:25:31.919115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.138 [2024-12-05 12:25:31.919173] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:01.138 [2024-12-05 12:25:31.919784] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:01.138 [2024-12-05 12:25:31.919810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.138 [2024-12-05 12:25:31.919816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:01.138 [2024-12-05 12:25:31.919825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:22:01.138 [2024-12-05 12:25:31.919832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.138 [2024-12-05 12:25:31.921192] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:01.138 [2024-12-05 12:25:31.931628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.138 [2024-12-05 12:25:31.931655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:01.138 [2024-12-05 12:25:31.931665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.440 ms 00:22:01.138 [2024-12-05 12:25:31.931673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.138 [2024-12-05 12:25:31.931739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.138 [2024-12-05 12:25:31.931750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:01.138 [2024-12-05 12:25:31.931757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:01.138 [2024-12-05 12:25:31.931764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.138 [2024-12-05 12:25:31.938052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.138 [2024-12-05 12:25:31.938077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:01.138 [2024-12-05 12:25:31.938085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.248 ms 00:22:01.138 [2024-12-05 12:25:31.938092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.138 [2024-12-05 12:25:31.938168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.138 [2024-12-05 12:25:31.938178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:01.138 [2024-12-05 12:25:31.938184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:01.138 [2024-12-05 12:25:31.938194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.138 [2024-12-05 12:25:31.938213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.138 [2024-12-05 12:25:31.938221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:01.138 [2024-12-05 12:25:31.938228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:01.138 [2024-12-05 12:25:31.938235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.138 [2024-12-05 12:25:31.938255] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:01.138 [2024-12-05 12:25:31.941359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:01.138 [2024-12-05 12:25:31.941378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:01.138 [2024-12-05 12:25:31.941387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.109 ms 00:22:01.138 [2024-12-05 12:25:31.941394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.138 [2024-12-05 12:25:31.941424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.138 [2024-12-05 12:25:31.941431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:01.138 [2024-12-05 12:25:31.941439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:01.138 [2024-12-05 12:25:31.941446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.138 [2024-12-05 12:25:31.941473] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:01.138 [2024-12-05 12:25:31.941489] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:01.138 [2024-12-05 12:25:31.941524] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:01.138 [2024-12-05 12:25:31.941536] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:01.138 [2024-12-05 12:25:31.941622] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:01.138 [2024-12-05 12:25:31.941630] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:01.138 [2024-12-05 12:25:31.941644] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:01.138 [2024-12-05 12:25:31.941653] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:01.138 [2024-12-05 12:25:31.941661] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:01.138 [2024-12-05 12:25:31.941668] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:01.138 [2024-12-05 12:25:31.941675] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:01.138 [2024-12-05 12:25:31.941682] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:01.138 [2024-12-05 12:25:31.941690] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:01.138 [2024-12-05 12:25:31.941697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.138 [2024-12-05 12:25:31.941704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:01.138 [2024-12-05 12:25:31.941711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:22:01.138 [2024-12-05 12:25:31.941726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.138 [2024-12-05 12:25:31.941805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.138 [2024-12-05 12:25:31.941814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:01.138 [2024-12-05 12:25:31.941820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:01.138 [2024-12-05 12:25:31.941828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.138 
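A quick consistency check on the layout figures just printed (a sketch, not part of the captured trace): the reported 23592960 L2P entries at a 4-byte address size come to 23592960 * 4 = 94371840 bytes, which is exactly the 90.00 MiB that the l2p region occupies in the NV cache layout dump that follows:

  echo $(( 23592960 * 4 / 1024 / 1024 ))   # prints 90, matching 'Region l2p ... blocks: 90.00 MiB' below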
[2024-12-05 12:25:31.941905] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:01.138 [2024-12-05 12:25:31.941915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:01.138 [2024-12-05 12:25:31.941921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:01.138 [2024-12-05 12:25:31.941929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.138 [2024-12-05 12:25:31.941935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:01.138 [2024-12-05 12:25:31.941943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:01.138 [2024-12-05 12:25:31.941948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:01.138 [2024-12-05 12:25:31.941958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:01.138 [2024-12-05 12:25:31.941964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:01.138 [2024-12-05 12:25:31.941971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:01.138 [2024-12-05 12:25:31.941977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:01.138 [2024-12-05 12:25:31.941984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:01.138 [2024-12-05 12:25:31.941989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:01.138 [2024-12-05 12:25:31.941996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:01.138 [2024-12-05 12:25:31.942001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:01.138 [2024-12-05 12:25:31.942010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.138 [2024-12-05 12:25:31.942016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:01.138 [2024-12-05 12:25:31.942023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:01.138 [2024-12-05 12:25:31.942033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.138 [2024-12-05 12:25:31.942039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:01.138 [2024-12-05 12:25:31.942045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:01.138 [2024-12-05 12:25:31.942051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.138 [2024-12-05 12:25:31.942057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:01.138 [2024-12-05 12:25:31.942064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:01.138 [2024-12-05 12:25:31.942069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.138 [2024-12-05 12:25:31.942077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:01.138 [2024-12-05 12:25:31.942082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:01.138 [2024-12-05 12:25:31.942088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.138 [2024-12-05 12:25:31.942093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:01.139 [2024-12-05 12:25:31.942100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:01.139 [2024-12-05 12:25:31.942104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.139 [2024-12-05 12:25:31.942112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:22:01.139 [2024-12-05 12:25:31.942117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:01.139 [2024-12-05 12:25:31.942124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:01.139 [2024-12-05 12:25:31.942129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:01.139 [2024-12-05 12:25:31.942136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:01.139 [2024-12-05 12:25:31.942141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:01.139 [2024-12-05 12:25:31.942148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:01.139 [2024-12-05 12:25:31.942153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:01.139 [2024-12-05 12:25:31.942161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.139 [2024-12-05 12:25:31.942165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:01.139 [2024-12-05 12:25:31.942172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:01.139 [2024-12-05 12:25:31.942177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.139 [2024-12-05 12:25:31.942184] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:01.139 [2024-12-05 12:25:31.942192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:01.139 [2024-12-05 12:25:31.942199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:01.139 [2024-12-05 12:25:31.942205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.139 [2024-12-05 12:25:31.942214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:01.139 [2024-12-05 12:25:31.942219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:01.139 [2024-12-05 12:25:31.942227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:01.139 [2024-12-05 12:25:31.942232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:01.139 [2024-12-05 12:25:31.942239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:01.139 [2024-12-05 12:25:31.942244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:01.139 [2024-12-05 12:25:31.942252] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:01.139 [2024-12-05 12:25:31.942259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:01.139 [2024-12-05 12:25:31.942269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:01.139 [2024-12-05 12:25:31.942275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:01.139 [2024-12-05 12:25:31.942283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:01.139 [2024-12-05 12:25:31.942288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:01.139 [2024-12-05 12:25:31.942295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:22:01.139 [2024-12-05 12:25:31.942301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:01.139 [2024-12-05 12:25:31.942308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:01.139 [2024-12-05 12:25:31.942314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:01.139 [2024-12-05 12:25:31.942321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:01.139 [2024-12-05 12:25:31.942326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:01.139 [2024-12-05 12:25:31.942333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:01.139 [2024-12-05 12:25:31.942338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:01.139 [2024-12-05 12:25:31.942345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:01.139 [2024-12-05 12:25:31.942351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:01.139 [2024-12-05 12:25:31.942357] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:01.139 [2024-12-05 12:25:31.942363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:01.139 [2024-12-05 12:25:31.942373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:01.139 [2024-12-05 12:25:31.942379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:01.139 [2024-12-05 12:25:31.942386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:01.139 [2024-12-05 12:25:31.942391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:01.139 [2024-12-05 12:25:31.942398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.139 [2024-12-05 12:25:31.942404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:01.139 [2024-12-05 12:25:31.942412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:22:01.139 [2024-12-05 12:25:31.942419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-12-05 12:25:31.966573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-12-05 12:25:31.966596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:01.436 [2024-12-05 12:25:31.966607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.087 ms 00:22:01.436 [2024-12-05 12:25:31.966616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 
[2024-12-05 12:25:31.966710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-12-05 12:25:31.966718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:01.436 [2024-12-05 12:25:31.966727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:01.436 [2024-12-05 12:25:31.966733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-12-05 12:25:31.992933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-12-05 12:25:31.992957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:01.436 [2024-12-05 12:25:31.992965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.180 ms 00:22:01.436 [2024-12-05 12:25:31.992972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-12-05 12:25:31.993019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-12-05 12:25:31.993027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:01.436 [2024-12-05 12:25:31.993035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:01.436 [2024-12-05 12:25:31.993041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-12-05 12:25:31.993443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-12-05 12:25:31.993471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:01.436 [2024-12-05 12:25:31.993482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:22:01.436 [2024-12-05 12:25:31.993488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-12-05 12:25:31.993605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-12-05 12:25:31.993613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:01.436 [2024-12-05 12:25:31.993622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:22:01.436 [2024-12-05 12:25:31.993628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-12-05 12:25:32.007015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-12-05 12:25:32.007035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:01.436 [2024-12-05 12:25:32.007045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.368 ms 00:22:01.436 [2024-12-05 12:25:32.007051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-12-05 12:25:32.032046] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:01.436 [2024-12-05 12:25:32.032076] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:01.436 [2024-12-05 12:25:32.032089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-12-05 12:25:32.032097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:01.436 [2024-12-05 12:25:32.032107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.961 ms 00:22:01.436 [2024-12-05 12:25:32.032118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-12-05 12:25:32.051252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:01.436 [2024-12-05 12:25:32.051277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:01.436 [2024-12-05 12:25:32.051288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.071 ms 00:22:01.436 [2024-12-05 12:25:32.051295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-12-05 12:25:32.060567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-12-05 12:25:32.060589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:01.436 [2024-12-05 12:25:32.060601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.210 ms 00:22:01.436 [2024-12-05 12:25:32.060607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-12-05 12:25:32.069527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-12-05 12:25:32.069548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:01.436 [2024-12-05 12:25:32.069558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.875 ms 00:22:01.436 [2024-12-05 12:25:32.069563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-12-05 12:25:32.070045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-12-05 12:25:32.070061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:01.436 [2024-12-05 12:25:32.070071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:22:01.436 [2024-12-05 12:25:32.070078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-12-05 12:25:32.116408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-12-05 12:25:32.116444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:01.436 [2024-12-05 12:25:32.116458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.310 ms 00:22:01.436 [2024-12-05 12:25:32.116472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-12-05 12:25:32.124291] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:01.436 [2024-12-05 12:25:32.138844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.436 [2024-12-05 12:25:32.138876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:01.436 [2024-12-05 12:25:32.138888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.302 ms 00:22:01.436 [2024-12-05 12:25:32.138896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.436 [2024-12-05 12:25:32.138985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.437 [2024-12-05 12:25:32.138994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:01.437 [2024-12-05 12:25:32.139002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:01.437 [2024-12-05 12:25:32.139010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.437 [2024-12-05 12:25:32.139057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.437 [2024-12-05 12:25:32.139067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:01.437 [2024-12-05 12:25:32.139073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.031 ms 00:22:01.437 [2024-12-05 12:25:32.139082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.437 [2024-12-05 12:25:32.139103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.437 [2024-12-05 12:25:32.139111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:01.437 [2024-12-05 12:25:32.139117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:01.437 [2024-12-05 12:25:32.139127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.437 [2024-12-05 12:25:32.139153] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:01.437 [2024-12-05 12:25:32.139165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.437 [2024-12-05 12:25:32.139175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:01.437 [2024-12-05 12:25:32.139182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:01.437 [2024-12-05 12:25:32.139188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.437 [2024-12-05 12:25:32.158138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.437 [2024-12-05 12:25:32.158162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:01.437 [2024-12-05 12:25:32.158173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.928 ms 00:22:01.437 [2024-12-05 12:25:32.158180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.437 [2024-12-05 12:25:32.158260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.437 [2024-12-05 12:25:32.158268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:01.437 [2024-12-05 12:25:32.158277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:01.437 [2024-12-05 12:25:32.158286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.437 [2024-12-05 12:25:32.159052] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:01.437 [2024-12-05 12:25:32.161407] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 241.935 ms, result 0 00:22:01.437 [2024-12-05 12:25:32.163241] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:01.437 Some configs were skipped because the RPC state that can call them passed over. 
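For reference, the unmap exercise that ftl/trim.sh drives in this trace reduces to a handful of commands. A minimal sketch, assuming a bash shell and using only the binaries, RPC names and parameters that actually appear in this log (the backgrounding details and the input to load_config are assumptions; trim.sh restores the configuration from a JSON source that the xtrace does not show):

  # start the target with FTL init logging (trim.sh@92 in the trace above)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  # wait for /var/tmp/spdk.sock, then restore the bdev/FTL configuration (trim.sh@96)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
  # trim 1024 blocks at each end of the 23592960-entry L2P range;
  # 23591936 = 23592960 - 1024, i.e. the final 1024 blocks (trim.sh@99 and @100)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  # tear down (trim.sh@102); this is what triggers the 'FTL shutdown' process traced below
  kill "$svcpid" && wait "$svcpid"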
00:22:01.437 12:25:32 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:01.694 [2024-12-05 12:25:32.384502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.694 [2024-12-05 12:25:32.384541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:01.694 [2024-12-05 12:25:32.384550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.575 ms 00:22:01.694 [2024-12-05 12:25:32.384558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.695 [2024-12-05 12:25:32.384584] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.658 ms, result 0 00:22:01.695 true 00:22:01.695 12:25:32 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:01.952 [2024-12-05 12:25:32.572470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.952 [2024-12-05 12:25:32.572495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:01.952 [2024-12-05 12:25:32.572504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.347 ms 00:22:01.952 [2024-12-05 12:25:32.572510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.952 [2024-12-05 12:25:32.572538] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.423 ms, result 0 00:22:01.952 true 00:22:01.952 12:25:32 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 77380 00:22:01.952 12:25:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77380 ']' 00:22:01.952 12:25:32 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77380 00:22:01.952 12:25:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:01.952 12:25:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.952 12:25:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77380 00:22:01.952 12:25:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:01.952 killing process with pid 77380 00:22:01.952 12:25:32 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:01.952 12:25:32 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77380' 00:22:01.952 12:25:32 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77380 00:22:01.952 12:25:32 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77380 00:22:02.522 [2024-12-05 12:25:33.165818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.522 [2024-12-05 12:25:33.165864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:02.522 [2024-12-05 12:25:33.165876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:02.522 [2024-12-05 12:25:33.165884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.522 [2024-12-05 12:25:33.165904] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:02.522 [2024-12-05 12:25:33.168120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.522 [2024-12-05 12:25:33.168142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:02.522 [2024-12-05 12:25:33.168155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.200 ms 00:22:02.522 [2024-12-05 12:25:33.168161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.522 [2024-12-05 12:25:33.168414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.522 [2024-12-05 12:25:33.168423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:02.522 [2024-12-05 12:25:33.168431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:22:02.522 [2024-12-05 12:25:33.168437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.522 [2024-12-05 12:25:33.172049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.522 [2024-12-05 12:25:33.172072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:02.522 [2024-12-05 12:25:33.172084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.593 ms 00:22:02.522 [2024-12-05 12:25:33.172090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.522 [2024-12-05 12:25:33.177343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.522 [2024-12-05 12:25:33.177365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:02.522 [2024-12-05 12:25:33.177375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.220 ms 00:22:02.522 [2024-12-05 12:25:33.177381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.522 [2024-12-05 12:25:33.185858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.522 [2024-12-05 12:25:33.185887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:02.522 [2024-12-05 12:25:33.185898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.413 ms 00:22:02.522 [2024-12-05 12:25:33.185904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.522 [2024-12-05 12:25:33.193477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.522 [2024-12-05 12:25:33.193501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:02.522 [2024-12-05 12:25:33.193510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.538 ms 00:22:02.522 [2024-12-05 12:25:33.193518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.522 [2024-12-05 12:25:33.193629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.522 [2024-12-05 12:25:33.193637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:02.522 [2024-12-05 12:25:33.193646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:02.522 [2024-12-05 12:25:33.193652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.522 [2024-12-05 12:25:33.202337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.522 [2024-12-05 12:25:33.202358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:02.522 [2024-12-05 12:25:33.202367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.667 ms 00:22:02.522 [2024-12-05 12:25:33.202373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.522 [2024-12-05 12:25:33.210487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.522 [2024-12-05 12:25:33.210508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:02.522 [2024-12-05 
12:25:33.210519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.081 ms 00:22:02.522 [2024-12-05 12:25:33.210524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.522 [2024-12-05 12:25:33.218167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.523 [2024-12-05 12:25:33.218188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:02.523 [2024-12-05 12:25:33.218197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.601 ms 00:22:02.523 [2024-12-05 12:25:33.218203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.523 [2024-12-05 12:25:33.225897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.523 [2024-12-05 12:25:33.225919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:02.523 [2024-12-05 12:25:33.225927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.642 ms 00:22:02.523 [2024-12-05 12:25:33.225933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.523 [2024-12-05 12:25:33.225961] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:02.523 [2024-12-05 12:25:33.225973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.225983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.225989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.225997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226086] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 
12:25:33.226253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:22:02.523 [2024-12-05 12:25:33.226420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:02.523 [2024-12-05 12:25:33.226540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:02.524 [2024-12-05 12:25:33.226684] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:02.524 [2024-12-05 12:25:33.226695] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b202c494-44a1-46c8-8ff5-771af2981a3d 00:22:02.524 [2024-12-05 12:25:33.226703] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:02.524 [2024-12-05 12:25:33.226711] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:02.524 [2024-12-05 12:25:33.226717] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:02.524 [2024-12-05 12:25:33.226725] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:02.524 [2024-12-05 12:25:33.226731] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:02.524 [2024-12-05 12:25:33.226738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:02.524 [2024-12-05 12:25:33.226744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:02.524 [2024-12-05 12:25:33.226750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:02.524 [2024-12-05 12:25:33.226755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:02.524 [2024-12-05 12:25:33.226762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.524 [2024-12-05 12:25:33.226769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:02.524 [2024-12-05 12:25:33.226778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:22:02.524 [2024-12-05 12:25:33.226783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.524 [2024-12-05 12:25:33.236950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.524 [2024-12-05 12:25:33.236970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:02.524 [2024-12-05 12:25:33.236982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.147 ms 00:22:02.524 [2024-12-05 12:25:33.236989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.524 [2024-12-05 12:25:33.237311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:02.524 [2024-12-05 12:25:33.237320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:02.524 [2024-12-05 12:25:33.237332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:22:02.524 [2024-12-05 12:25:33.237338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.524 [2024-12-05 12:25:33.274096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.524 [2024-12-05 12:25:33.274120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:02.524 [2024-12-05 12:25:33.274130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.524 [2024-12-05 12:25:33.274136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.524 [2024-12-05 12:25:33.274226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.524 [2024-12-05 12:25:33.274234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:02.524 [2024-12-05 12:25:33.274245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.524 [2024-12-05 12:25:33.274250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.524 [2024-12-05 12:25:33.274292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.524 [2024-12-05 12:25:33.274300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:02.524 [2024-12-05 12:25:33.274309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.524 [2024-12-05 12:25:33.274315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.524 [2024-12-05 12:25:33.274330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.524 [2024-12-05 12:25:33.274337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:02.524 [2024-12-05 12:25:33.274344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.524 [2024-12-05 12:25:33.274351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.524 [2024-12-05 12:25:33.336640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.524 [2024-12-05 12:25:33.336669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:02.524 [2024-12-05 12:25:33.336679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.524 [2024-12-05 12:25:33.336686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.524 [2024-12-05 12:25:33.388486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.524 [2024-12-05 12:25:33.388516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:02.524 [2024-12-05 12:25:33.388526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.524 [2024-12-05 12:25:33.388536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.524 [2024-12-05 12:25:33.388613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.524 [2024-12-05 12:25:33.388621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.524 [2024-12-05 12:25:33.388632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.524 [2024-12-05 12:25:33.388639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
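The statistics dump above reports "total writes: 960", "user writes: 0" and "WAF: inf". Write amplification factor (WAF) is the ratio of total media writes to user-issued writes, and this shutdown happened before any user I/O reached the device, so the divisor is zero and the dump prints "inf" instead of dividing. A minimal C sketch of that arithmetic, assuming only the two counters shown in the log (illustrative, not the actual ftl_debug.c code):

/* Hypothetical helper mirroring the "WAF:" line in the stats dump above.
 * WAF = total media writes / user writes; undefined ("inf") while no
 * user writes have happened yet. */
#include <stdio.h>

static void dump_waf(unsigned long total_writes, unsigned long user_writes)
{
	if (user_writes == 0) {
		printf("WAF: inf\n");   /* matches the log: 960 / 0 */
	} else {
		printf("WAF: %.4f\n", (double)total_writes / (double)user_writes);
	}
}

int main(void)
{
	dump_waf(960, 0);   /* counter values taken from the dump above */
	return 0;
}

The 960 writes with zero user writes are all internal, presumably the superblock, band info, and P2L/L2P persistence steps traced above, which is also why every band still shows "wr_cnt: 0".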
00:22:02.524 [2024-12-05 12:25:33.388666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.524 [2024-12-05 12:25:33.388673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.524 [2024-12-05 12:25:33.388682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.524 [2024-12-05 12:25:33.388688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.524 [2024-12-05 12:25:33.388772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.524 [2024-12-05 12:25:33.388779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.524 [2024-12-05 12:25:33.388787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.524 [2024-12-05 12:25:33.388794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.524 [2024-12-05 12:25:33.388823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.524 [2024-12-05 12:25:33.388830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:02.524 [2024-12-05 12:25:33.388838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.524 [2024-12-05 12:25:33.388844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.524 [2024-12-05 12:25:33.388882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.524 [2024-12-05 12:25:33.388889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.524 [2024-12-05 12:25:33.388898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.524 [2024-12-05 12:25:33.388905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.524 [2024-12-05 12:25:33.388947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.524 [2024-12-05 12:25:33.388955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.524 [2024-12-05 12:25:33.388962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.524 [2024-12-05 12:25:33.388968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.783 [2024-12-05 12:25:33.389098] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 223.258 ms, result 0 00:22:03.350 12:25:33 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:03.350 [2024-12-05 12:25:34.017242] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
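With the FTL device shut down cleanly, the trim test relaunches it under spdk_dd: "--ib=ftl0" reads from the ftl0 bdev, "--of=.../test/ftl/data" writes to a regular file, "--count=65536" limits the transfer, and "--json=.../ftl.json" carries the bdev configuration into the new process. At a 4 KiB block size, 65536 blocks works out to the 256 MB total seen in the "Copying: X/256 [MB]" progress lines further below. Those lines amount to a chunked copy loop that reports throughput roughly once a second; a hedged plain-POSIX sketch of such a loop follows (the stdio file names are hypothetical stand-ins, since spdk_dd itself drives SPDK bdevs rather than regular files):

/* Sketch of a dd-style copy loop with per-second progress reporting, in
 * the spirit of the "Copying: N/256 [MB] (M MBps)" lines below. This is
 * not spdk_dd source. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define CHUNK (1 << 20) /* 1 MiB per read/write */

int main(void)
{
	FILE *in = fopen("ftl0.img", "rb"); /* hypothetical input */
	FILE *out = fopen("data", "wb");    /* hypothetical output */
	if (!in || !out) { perror("fopen"); return 1; }

	char *buf = malloc(CHUNK);
	size_t n, total = 0, since = 0;
	time_t last = time(NULL);

	while ((n = fread(buf, 1, CHUNK, in)) > 0) {
		if (fwrite(buf, 1, n, out) != n) { perror("fwrite"); return 1; }
		total += n;
		since += n;
		time_t now = time(NULL);
		if (now != last) { /* one progress line per elapsed second */
			printf("Copying: %zu [MB] (%zu MBps)\n",
			       total >> 20, (since >> 20) / (size_t)(now - last));
			last = now;
			since = 0;
		}
	}
	printf("Copied %zu MB total\n", total >> 20);
	free(buf);
	fclose(in);
	fclose(out);
	return 0;
}

Before the copy can start, the spdk_dd process below re-runs the full "FTL startup" management pipeline against the same superblock; note that it now loads existing state ("SHM: clean 0, shm_clean 0", NV cache "full chunks = 2") rather than initializing from scratch.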
00:22:03.350 [2024-12-05 12:25:34.017355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77428 ] 00:22:03.350 [2024-12-05 12:25:34.175293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.609 [2024-12-05 12:25:34.281875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.868 [2024-12-05 12:25:34.515024] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:03.868 [2024-12-05 12:25:34.515077] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:03.868 [2024-12-05 12:25:34.668555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.868 [2024-12-05 12:25:34.668589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:03.868 [2024-12-05 12:25:34.668600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:03.868 [2024-12-05 12:25:34.668608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.868 [2024-12-05 12:25:34.670828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.868 [2024-12-05 12:25:34.670854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:03.868 [2024-12-05 12:25:34.670862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.207 ms 00:22:03.868 [2024-12-05 12:25:34.670869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.868 [2024-12-05 12:25:34.670931] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:03.868 [2024-12-05 12:25:34.671483] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:03.868 [2024-12-05 12:25:34.671503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.868 [2024-12-05 12:25:34.671509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:03.868 [2024-12-05 12:25:34.671517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:22:03.868 [2024-12-05 12:25:34.671523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.868 [2024-12-05 12:25:34.673051] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:03.868 [2024-12-05 12:25:34.683646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.868 [2024-12-05 12:25:34.683673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:03.868 [2024-12-05 12:25:34.683683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.597 ms 00:22:03.868 [2024-12-05 12:25:34.683691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.868 [2024-12-05 12:25:34.683766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.869 [2024-12-05 12:25:34.683775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:03.869 [2024-12-05 12:25:34.683782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:03.869 [2024-12-05 12:25:34.683788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.869 [2024-12-05 12:25:34.690065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:03.869 [2024-12-05 12:25:34.690086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:03.869 [2024-12-05 12:25:34.690093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.242 ms 00:22:03.869 [2024-12-05 12:25:34.690099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.869 [2024-12-05 12:25:34.690173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.869 [2024-12-05 12:25:34.690180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:03.869 [2024-12-05 12:25:34.690187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:22:03.869 [2024-12-05 12:25:34.690196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.869 [2024-12-05 12:25:34.690213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.869 [2024-12-05 12:25:34.690219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:03.869 [2024-12-05 12:25:34.690226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:03.869 [2024-12-05 12:25:34.690232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.869 [2024-12-05 12:25:34.690254] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:03.869 [2024-12-05 12:25:34.693184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.869 [2024-12-05 12:25:34.693207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:03.869 [2024-12-05 12:25:34.693215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.937 ms 00:22:03.869 [2024-12-05 12:25:34.693220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.869 [2024-12-05 12:25:34.693252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.869 [2024-12-05 12:25:34.693259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:03.869 [2024-12-05 12:25:34.693266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:03.869 [2024-12-05 12:25:34.693274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.869 [2024-12-05 12:25:34.693289] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:03.869 [2024-12-05 12:25:34.693304] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:03.869 [2024-12-05 12:25:34.693333] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:03.869 [2024-12-05 12:25:34.693345] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:03.869 [2024-12-05 12:25:34.693427] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:03.869 [2024-12-05 12:25:34.693436] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:03.869 [2024-12-05 12:25:34.693447] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:03.869 [2024-12-05 12:25:34.693454] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:03.869 [2024-12-05 12:25:34.693472] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:03.869 [2024-12-05 12:25:34.693479] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:03.869 [2024-12-05 12:25:34.693485] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:03.869 [2024-12-05 12:25:34.693491] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:03.869 [2024-12-05 12:25:34.693497] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:03.869 [2024-12-05 12:25:34.693504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.869 [2024-12-05 12:25:34.693510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:03.869 [2024-12-05 12:25:34.693517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:22:03.869 [2024-12-05 12:25:34.693522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.869 [2024-12-05 12:25:34.693592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.869 [2024-12-05 12:25:34.693599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:03.869 [2024-12-05 12:25:34.693605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:03.869 [2024-12-05 12:25:34.693610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.869 [2024-12-05 12:25:34.693687] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:03.869 [2024-12-05 12:25:34.693695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:03.869 [2024-12-05 12:25:34.693701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:03.869 [2024-12-05 12:25:34.693708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.869 [2024-12-05 12:25:34.693716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:03.869 [2024-12-05 12:25:34.693722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:03.869 [2024-12-05 12:25:34.693728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:03.869 [2024-12-05 12:25:34.693734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:03.869 [2024-12-05 12:25:34.693739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:03.869 [2024-12-05 12:25:34.693745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:03.869 [2024-12-05 12:25:34.693751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:03.869 [2024-12-05 12:25:34.693761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:03.869 [2024-12-05 12:25:34.693766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:03.869 [2024-12-05 12:25:34.693772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:03.869 [2024-12-05 12:25:34.693777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:03.869 [2024-12-05 12:25:34.693782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.869 [2024-12-05 12:25:34.693787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:03.869 [2024-12-05 12:25:34.693792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:03.869 [2024-12-05 12:25:34.693797] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.869 [2024-12-05 12:25:34.693802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:03.869 [2024-12-05 12:25:34.693807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:03.869 [2024-12-05 12:25:34.693812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.869 [2024-12-05 12:25:34.693817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:03.869 [2024-12-05 12:25:34.693822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:03.869 [2024-12-05 12:25:34.693827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.869 [2024-12-05 12:25:34.693832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:03.869 [2024-12-05 12:25:34.693837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:03.869 [2024-12-05 12:25:34.693842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.869 [2024-12-05 12:25:34.693848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:03.869 [2024-12-05 12:25:34.693853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:03.869 [2024-12-05 12:25:34.693857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.869 [2024-12-05 12:25:34.693862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:03.869 [2024-12-05 12:25:34.693867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:03.869 [2024-12-05 12:25:34.693872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:03.869 [2024-12-05 12:25:34.693877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:03.869 [2024-12-05 12:25:34.693882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:03.869 [2024-12-05 12:25:34.693889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:03.869 [2024-12-05 12:25:34.693894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:03.869 [2024-12-05 12:25:34.693900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:03.869 [2024-12-05 12:25:34.693904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.869 [2024-12-05 12:25:34.693910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:03.869 [2024-12-05 12:25:34.693915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:03.869 [2024-12-05 12:25:34.693920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.869 [2024-12-05 12:25:34.693925] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:03.869 [2024-12-05 12:25:34.693934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:03.869 [2024-12-05 12:25:34.693940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:03.869 [2024-12-05 12:25:34.693946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.869 [2024-12-05 12:25:34.693952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:03.869 [2024-12-05 12:25:34.693957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:03.869 [2024-12-05 12:25:34.693962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:03.869 
[2024-12-05 12:25:34.693967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:03.869 [2024-12-05 12:25:34.693972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:03.869 [2024-12-05 12:25:34.693977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:03.869 [2024-12-05 12:25:34.693983] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:03.869 [2024-12-05 12:25:34.693990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:03.869 [2024-12-05 12:25:34.693997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:03.869 [2024-12-05 12:25:34.694002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:03.869 [2024-12-05 12:25:34.694008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:03.870 [2024-12-05 12:25:34.694013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:03.870 [2024-12-05 12:25:34.694019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:03.870 [2024-12-05 12:25:34.694024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:03.870 [2024-12-05 12:25:34.694029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:03.870 [2024-12-05 12:25:34.694034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:03.870 [2024-12-05 12:25:34.694039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:03.870 [2024-12-05 12:25:34.694044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:03.870 [2024-12-05 12:25:34.694050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:03.870 [2024-12-05 12:25:34.694055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:03.870 [2024-12-05 12:25:34.694061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:03.870 [2024-12-05 12:25:34.694067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:03.870 [2024-12-05 12:25:34.694072] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:03.870 [2024-12-05 12:25:34.694079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:03.870 [2024-12-05 12:25:34.694087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:03.870 [2024-12-05 12:25:34.694093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:03.870 [2024-12-05 12:25:34.694099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:03.870 [2024-12-05 12:25:34.694104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:03.870 [2024-12-05 12:25:34.694110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.870 [2024-12-05 12:25:34.694116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:03.870 [2024-12-05 12:25:34.694122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:22:03.870 [2024-12-05 12:25:34.694129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.870 [2024-12-05 12:25:34.718289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.870 [2024-12-05 12:25:34.718314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:03.870 [2024-12-05 12:25:34.718322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.102 ms 00:22:03.870 [2024-12-05 12:25:34.718331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.870 [2024-12-05 12:25:34.718426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.870 [2024-12-05 12:25:34.718434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:03.870 [2024-12-05 12:25:34.718441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:03.870 [2024-12-05 12:25:34.718447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.762290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.762320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:04.130 [2024-12-05 12:25:34.762330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.816 ms 00:22:04.130 [2024-12-05 12:25:34.762337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.762413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.762422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:04.130 [2024-12-05 12:25:34.762430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:04.130 [2024-12-05 12:25:34.762436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.762830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.762850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:04.130 [2024-12-05 12:25:34.762864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:22:04.130 [2024-12-05 12:25:34.762871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.762987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.763000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:04.130 [2024-12-05 12:25:34.763007] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:22:04.130 [2024-12-05 12:25:34.763013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.775248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.775271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:04.130 [2024-12-05 12:25:34.775279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.218 ms 00:22:04.130 [2024-12-05 12:25:34.775284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.785765] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:04.130 [2024-12-05 12:25:34.785793] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:04.130 [2024-12-05 12:25:34.785802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.785809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:04.130 [2024-12-05 12:25:34.785817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.442 ms 00:22:04.130 [2024-12-05 12:25:34.785823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.804507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.804535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:04.130 [2024-12-05 12:25:34.804544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.627 ms 00:22:04.130 [2024-12-05 12:25:34.804551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.813961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.813986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:04.130 [2024-12-05 12:25:34.813993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.352 ms 00:22:04.130 [2024-12-05 12:25:34.813999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.822914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.822937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:04.130 [2024-12-05 12:25:34.822945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.870 ms 00:22:04.130 [2024-12-05 12:25:34.822951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.823430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.823446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:04.130 [2024-12-05 12:25:34.823453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:22:04.130 [2024-12-05 12:25:34.823459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.871297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.871331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:04.130 [2024-12-05 12:25:34.871341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 47.810 ms 00:22:04.130 [2024-12-05 12:25:34.871348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.879647] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:04.130 [2024-12-05 12:25:34.894059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.894091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:04.130 [2024-12-05 12:25:34.894106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.632 ms 00:22:04.130 [2024-12-05 12:25:34.894113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.894191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.894200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:04.130 [2024-12-05 12:25:34.894208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:04.130 [2024-12-05 12:25:34.894214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.894259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.894267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:04.130 [2024-12-05 12:25:34.894277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:04.130 [2024-12-05 12:25:34.894285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.894310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.894317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:04.130 [2024-12-05 12:25:34.894324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:04.130 [2024-12-05 12:25:34.894330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.894356] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:04.130 [2024-12-05 12:25:34.894365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.894372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:04.130 [2024-12-05 12:25:34.894378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:04.130 [2024-12-05 12:25:34.894384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.913661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.913802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:04.130 [2024-12-05 12:25:34.913816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.256 ms 00:22:04.130 [2024-12-05 12:25:34.913823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.130 [2024-12-05 12:25:34.913900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:04.130 [2024-12-05 12:25:34.913909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:04.130 [2024-12-05 12:25:34.913916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:04.130 [2024-12-05 12:25:34.913924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
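Every management step in this startup (and in the shutdown that follows the copy) is reported through the same four-line trace pattern from mngt/ftl_mngt.c: the step kind ("Action", or "Rollback" when walking back), its name, the wall-clock duration, and a status code. A compact C sketch of such a timing wrapper, assuming a hypothetical step callback (the real pipeline in mngt/ftl_mngt.c is callback- and queue-driven, so this only mirrors the logging shape):

/* Illustrative trace_step-style wrapper: run a step, time it, and emit
 * the kind/name/duration/status quadruple seen throughout this log.
 * step_fn and the sample step body are hypothetical. */
#include <stdio.h>
#include <time.h>

typedef int (*step_fn)(void);

static void trace_step(const char *kind, const char *name, step_fn fn)
{
	struct timespec t0, t1;
	clock_gettime(CLOCK_MONOTONIC, &t0);
	int status = fn();
	clock_gettime(CLOCK_MONOTONIC, &t1);
	double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
		    (t1.tv_nsec - t0.tv_nsec) / 1e6;
	printf("[FTL][ftl0] %s\n", kind);
	printf("[FTL][ftl0]  name: %s\n", name);
	printf("[FTL][ftl0]  duration: %.3f ms\n", ms);
	printf("[FTL][ftl0]  status: %d\n", status);
}

static int set_dirty_state(void) { return 0; } /* placeholder step body */

int main(void)
{
	trace_step("Action", "Set FTL dirty state", set_dirty_state);
	return 0;
}

The "Rollback" entries with 0.000 ms durations seen around the shutdowns appear to be steps whose rollback handler has nothing left to undo; that reading is inferred from the log rather than stated in it.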
00:22:04.130 [2024-12-05 12:25:34.914705] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:04.130 [2024-12-05 12:25:34.916999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 245.878 ms, result 0
00:22:04.130 [2024-12-05 12:25:34.918236] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:04.130 [2024-12-05 12:25:34.928851] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:05.506 [2024-12-05T12:25:37.307Z] Copying: 17/256 [MB] (17 MBps)
[2024-12-05T12:25:38.239Z] Copying: 31/256 [MB] (13 MBps)
[2024-12-05T12:25:39.175Z] Copying: 47/256 [MB] (16 MBps)
[2024-12-05T12:25:40.116Z] Copying: 63/256 [MB] (15 MBps)
[2024-12-05T12:25:41.051Z] Copying: 74/256 [MB] (10 MBps)
[2024-12-05T12:25:41.986Z] Copying: 88/256 [MB] (14 MBps)
[2024-12-05T12:25:43.362Z] Copying: 104/256 [MB] (15 MBps)
[2024-12-05T12:25:44.305Z] Copying: 121/256 [MB] (16 MBps)
[2024-12-05T12:25:45.251Z] Copying: 138/256 [MB] (17 MBps)
[2024-12-05T12:25:46.190Z] Copying: 152/256 [MB] (13 MBps)
[2024-12-05T12:25:47.129Z] Copying: 169/256 [MB] (17 MBps)
[2024-12-05T12:25:48.070Z] Copying: 190/256 [MB] (21 MBps)
[2024-12-05T12:25:49.013Z] Copying: 205/256 [MB] (14 MBps)
[2024-12-05T12:25:50.391Z] Copying: 217/256 [MB] (12 MBps)
[2024-12-05T12:25:51.328Z] Copying: 232/256 [MB] (15 MBps)
[2024-12-05T12:25:51.588Z] Copying: 249/256 [MB] (16 MBps)
[2024-12-05T12:25:52.160Z] Copying: 256/256 [MB] (average 15 MBps)
[2024-12-05 12:25:51.846427] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:21.291 [2024-12-05 12:25:51.858124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.291 [2024-12-05 12:25:51.858184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:22:21.291 [2024-12-05 12:25:51.858213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:22:21.291 [2024-12-05 12:25:51.858224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.291 [2024-12-05 12:25:51.858260] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:22:21.291 [2024-12-05 12:25:51.861684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.291 [2024-12-05 12:25:51.861912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:22:21.291 [2024-12-05 12:25:51.861939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.407 ms
00:22:21.291 [2024-12-05 12:25:51.861949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.291 [2024-12-05 12:25:51.862288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.291 [2024-12-05 12:25:51.862302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:22:21.291 [2024-12-05 12:25:51.862314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms
00:22:21.291 [2024-12-05 12:25:51.862323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.291 [2024-12-05 12:25:51.866071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.291 [2024-12-05 12:25:51.866099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:22:21.291 [2024-12-05 12:25:51.866112] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.727 ms 00:22:21.291 [2024-12-05 12:25:51.866122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.291 [2024-12-05 12:25:51.874119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.291 [2024-12-05 12:25:51.874170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:21.291 [2024-12-05 12:25:51.874183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.970 ms 00:22:21.291 [2024-12-05 12:25:51.874193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.291 [2024-12-05 12:25:51.902655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.291 [2024-12-05 12:25:51.902707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:21.291 [2024-12-05 12:25:51.902722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.366 ms 00:22:21.291 [2024-12-05 12:25:51.902731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.291 [2024-12-05 12:25:51.920577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.291 [2024-12-05 12:25:51.920812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:21.291 [2024-12-05 12:25:51.920839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.765 ms 00:22:21.291 [2024-12-05 12:25:51.920849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.291 [2024-12-05 12:25:51.921167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.291 [2024-12-05 12:25:51.921189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:21.291 [2024-12-05 12:25:51.921217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:22:21.291 [2024-12-05 12:25:51.921226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.291 [2024-12-05 12:25:51.948127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.291 [2024-12-05 12:25:51.948325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:21.291 [2024-12-05 12:25:51.948347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.880 ms 00:22:21.291 [2024-12-05 12:25:51.948356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.291 [2024-12-05 12:25:51.981708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.291 [2024-12-05 12:25:51.981766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:21.291 [2024-12-05 12:25:51.981782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.871 ms 00:22:21.291 [2024-12-05 12:25:51.981791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.291 [2024-12-05 12:25:52.006855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.291 [2024-12-05 12:25:52.007079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:21.291 [2024-12-05 12:25:52.007103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.007 ms 00:22:21.291 [2024-12-05 12:25:52.007111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.291 [2024-12-05 12:25:52.032455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.291 [2024-12-05 12:25:52.032510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:22:21.291 [2024-12-05 12:25:52.032523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.953 ms 00:22:21.291 [2024-12-05 12:25:52.032532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.291 [2024-12-05 12:25:52.032583] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:21.291 [2024-12-05 12:25:52.032603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 
12:25:52.032791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.032995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:22:21.291 [2024-12-05 12:25:52.033002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:21.291 [2024-12-05 12:25:52.033200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:21.292 [2024-12-05 12:25:52.033208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:21.292 [2024-12-05 12:25:52.033218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:21.292 [2024-12-05 12:25:52.033227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:22:21.292 [2024-12-05 12:25:52.033515] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:21.292 [2024-12-05 12:25:52.033526] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b202c494-44a1-46c8-8ff5-771af2981a3d
00:22:21.292 [2024-12-05 12:25:52.033536] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:21.292 [2024-12-05 12:25:52.033545] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:22:21.292 [2024-12-05 12:25:52.033553] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:22:21.292 [2024-12-05 12:25:52.033562] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:22:21.292 [2024-12-05 12:25:52.033588] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:21.292 [2024-12-05 12:25:52.033603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:21.292 [2024-12-05 12:25:52.033612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:21.292 [2024-12-05 12:25:52.033632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:21.292 [2024-12-05 12:25:52.033640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:21.292 [2024-12-05 12:25:52.033649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.292 [2024-12-05 12:25:52.033659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:22:21.292 [2024-12-05 12:25:52.033669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms
00:22:21.292 [2024-12-05 12:25:52.033679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.292 [2024-12-05 12:25:52.048329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.292 [2024-12-05 12:25:52.048372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:22:21.292 [2024-12-05 12:25:52.048386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.612 ms
00:22:21.292 [2024-12-05 12:25:52.048401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.292 [2024-12-05 12:25:52.048889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.292 [2024-12-05 12:25:52.048909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:22:21.292 [2024-12-05 12:25:52.048920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms
00:22:21.292 [2024-12-05 12:25:52.048929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.292 [2024-12-05 12:25:52.091279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:21.292 [2024-12-05 12:25:52.091332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:21.292 [2024-12-05 12:25:52.091351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:21.292 [2024-12-05 12:25:52.091360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
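The shutdown dump above ends with "WAF: inf". As the counters suggest, the write amplification factor is media writes divided by host writes; this trim-only run issued 960 internal writes against 0 user writes, so the quotient is undefined and ftl_debug.c prints "inf". A minimal sketch of that ratio, using only the two counters from the dump (the helper name is illustrative, not SPDK code):

    # Illustrative helper: WAF = total (media) writes / user writes,
    # reported as "inf" when no user I/O ran, as in the dump above.
    waf() {
      local total=$1 user=$2
      if [ "$user" -eq 0 ]; then
        echo inf
      else
        awk -v t="$total" -v u="$user" 'BEGIN { printf "%.3f\n", t / u }'
      fi
    }
    waf 960 0   # -> inf, matching the stats above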
00:22:21.292 [2024-12-05 12:25:52.091455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:21.292 [2024-12-05 12:25:52.091486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:22:21.292 [2024-12-05 12:25:52.091498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:21.292 [2024-12-05 12:25:52.091507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.292 [2024-12-05 12:25:52.091590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:21.292 [2024-12-05 12:25:52.091602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:22:21.292 [2024-12-05 12:25:52.091615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:21.292 [2024-12-05 12:25:52.091625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.292 [2024-12-05 12:25:52.091650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:21.292 [2024-12-05 12:25:52.091661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:22:21.292 [2024-12-05 12:25:52.091670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:21.292 [2024-12-05 12:25:52.091678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.552 [2024-12-05 12:25:52.183370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:21.552 [2024-12-05 12:25:52.183648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:22:21.552 [2024-12-05 12:25:52.183678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:21.552 [2024-12-05 12:25:52.183698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.552 [2024-12-05 12:25:52.258953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:21.552 [2024-12-05 12:25:52.259193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:22:21.552 [2024-12-05 12:25:52.259216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:21.552 [2024-12-05 12:25:52.259225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.552 [2024-12-05 12:25:52.259301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:21.552 [2024-12-05 12:25:52.259312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:22:21.552 [2024-12-05 12:25:52.259322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:21.552 [2024-12-05 12:25:52.259331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.552 [2024-12-05 12:25:52.259376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:21.552 [2024-12-05 12:25:52.259388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:22:21.552 [2024-12-05 12:25:52.259398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:21.552 [2024-12-05 12:25:52.259407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.552 [2024-12-05 12:25:52.259565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:21.552 [2024-12-05 12:25:52.259581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:22:21.552 [2024-12-05 12:25:52.259591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:21.552 [2024-12-05 12:25:52.259599]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.552 [2024-12-05 12:25:52.259652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.552 [2024-12-05 12:25:52.259670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:21.552 [2024-12-05 12:25:52.259679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.552 [2024-12-05 12:25:52.259687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.552 [2024-12-05 12:25:52.259742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.552 [2024-12-05 12:25:52.259753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:21.552 [2024-12-05 12:25:52.259764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.552 [2024-12-05 12:25:52.259773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.552 [2024-12-05 12:25:52.259837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.552 [2024-12-05 12:25:52.259851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:21.552 [2024-12-05 12:25:52.259861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.552 [2024-12-05 12:25:52.259870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.552 [2024-12-05 12:25:52.260066] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 401.938 ms, result 0 00:22:22.119 00:22:22.119 00:22:22.119 12:25:52 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:22.686 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:22.686 12:25:53 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:22.686 12:25:53 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:22.686 12:25:53 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:22.686 12:25:53 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:22.686 12:25:53 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:22.945 12:25:53 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:22.945 Process with pid 77380 is not found 00:22:22.945 12:25:53 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 77380 00:22:22.945 12:25:53 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77380 ']' 00:22:22.945 12:25:53 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77380 00:22:22.945 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77380) - No such process 00:22:22.945 12:25:53 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 77380 is not found' 00:22:22.945 ************************************ 00:22:22.945 END TEST ftl_trim 00:22:22.945 ************************************ 00:22:22.945 00:22:22.945 real 1m21.614s 00:22:22.945 user 1m37.710s 00:22:22.945 sys 0m14.990s 00:22:22.945 12:25:53 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:22.945 12:25:53 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:22.945 12:25:53 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:22.945 12:25:53 ftl -- common/autotest_common.sh@1105 -- # '[' 
5 -le 1 ']' 00:22:22.945 12:25:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:22.945 12:25:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:22.945 ************************************ 00:22:22.945 START TEST ftl_restore 00:22:22.945 ************************************ 00:22:22.945 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:22.945 * Looking for test storage... 00:22:22.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:22.946 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:22.946 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:22.946 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:22:22.946 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:22:22.946 12:25:53 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:22:23.207 12:25:53 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:22:23.207 12:25:53 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.207 12:25:53 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:22:23.208 12:25:53 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:22:23.208 12:25:53 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:23.208 12:25:53 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:23.208 12:25:53 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:22:23.208 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.208 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:23.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.208 --rc genhtml_branch_coverage=1 00:22:23.208 --rc genhtml_function_coverage=1 00:22:23.208 --rc genhtml_legend=1 00:22:23.208 --rc geninfo_all_blocks=1 00:22:23.208 --rc geninfo_unexecuted_blocks=1 00:22:23.208 00:22:23.208 ' 00:22:23.208 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:23.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.208 --rc genhtml_branch_coverage=1 00:22:23.208 --rc genhtml_function_coverage=1 00:22:23.208 --rc genhtml_legend=1 00:22:23.208 --rc geninfo_all_blocks=1 00:22:23.208 --rc geninfo_unexecuted_blocks=1 00:22:23.208 00:22:23.208 ' 00:22:23.208 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:23.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.208 --rc genhtml_branch_coverage=1 00:22:23.208 --rc genhtml_function_coverage=1 00:22:23.208 --rc genhtml_legend=1 00:22:23.208 --rc geninfo_all_blocks=1 00:22:23.208 --rc geninfo_unexecuted_blocks=1 00:22:23.208 00:22:23.208 ' 00:22:23.208 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:23.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.208 --rc genhtml_branch_coverage=1 00:22:23.208 --rc genhtml_function_coverage=1 00:22:23.208 --rc genhtml_legend=1 00:22:23.208 --rc geninfo_all_blocks=1 00:22:23.208 --rc geninfo_unexecuted_blocks=1 00:22:23.208 00:22:23.208 ' 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
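The xtrace above is scripts/common.sh deciding whether the installed lcov is new enough for branch-coverage flags: `lt 1.15 2` splits both version strings on `.`, `-` and `:` into arrays and compares them field by field, and since 1 < 2 on the first field it returns 0, so the `--rc lcov_branch_coverage=1 ...` options get exported. A condensed re-creation of that comparison, simplified from the trace (the real cmp_versions in scripts/common.sh supports more operators than shown here):

    # Simplified sketch of the lt/cmp_versions logic traced above.
    lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
      IFS=.-: read -ra ver2 <<< "$2"   # "2"    -> (2)
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
    }
    lt 1.15 2   # returns 0, matching the trace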
00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:23.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
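ftl/common.sh above pins down where everything lives: `$rpc_py` for RPC calls, `$spdk_tgt_bin` and the `[0]`/`[1]` core masks for target and initiator, and the tgt/ini JSON config paths. The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line is waitforlisten doing its job. A minimal stand-in for that launch-and-wait step, under the exports above; the polling loop is an assumption standing in for the real waitforlisten in autotest_common.sh, which checks the socket and pid more carefully:

    # Hypothetical simplified launch-and-wait using the exports above.
    "$spdk_tgt_bin" -m "$ftl_tgt_core_mask" &
    svcpid=$!
    for (( i = 0; i < 240; i++ )); do
      # rpc.py fails until the target listens on /var/tmp/spdk.sock
      "$rpc_py" rpc_get_methods &> /dev/null && break
      sleep 0.5
    done
    kill -0 "$svcpid"   # the target must still be alive once RPC answers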
00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.VHJJ9ww0HF 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77699 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77699 00:22:23.208 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77699 ']' 00:22:23.208 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.208 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.208 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.208 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.208 12:25:53 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:23.208 12:25:53 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:23.208 [2024-12-05 12:25:53.921853] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
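restore.sh's prologue, replayed in the trace above, is plain getopts: `-c` selects the NV-cache BDF (0000:00:10.0 here), the remaining positionals become the base-device BDF (0000:00:11.0) and the 240-second RPC timeout, a scratch mount dir comes from mktemp, and restore_kill is trapped so any failure tears the target down. A condensed reconstruction from the xtrace (the -u/-f branches are omitted, and details may differ from the actual restore.sh):

    # Condensed option handling as the trace replays it.
    mount_dir=$(mktemp -d)
    while getopts :u:c:f opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;   # here: 0000:00:10.0
        # -u and -f branches elided in this sketch
      esac
    done
    shift 2                      # drop "-c <bdf>"
    device=$1                    # here: 0000:00:11.0
    timeout=240
    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT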
00:22:23.208 [2024-12-05 12:25:53.921996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77699 ] 00:22:23.470 [2024-12-05 12:25:54.087511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.470 [2024-12-05 12:25:54.237211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.411 12:25:54 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.411 12:25:54 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:22:24.411 12:25:54 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:24.411 12:25:54 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:24.411 12:25:54 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:24.411 12:25:54 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:24.411 12:25:54 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:24.411 12:25:54 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:24.670 12:25:55 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:24.670 12:25:55 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:24.670 12:25:55 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:24.670 12:25:55 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:24.670 12:25:55 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:24.670 12:25:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:24.670 12:25:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:24.670 12:25:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:24.670 12:25:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:24.670 { 00:22:24.670 "name": "nvme0n1", 00:22:24.670 "aliases": [ 00:22:24.670 "d62fd711-ebcf-4f59-8b9d-58a1608a3070" 00:22:24.670 ], 00:22:24.670 "product_name": "NVMe disk", 00:22:24.670 "block_size": 4096, 00:22:24.670 "num_blocks": 1310720, 00:22:24.670 "uuid": "d62fd711-ebcf-4f59-8b9d-58a1608a3070", 00:22:24.670 "numa_id": -1, 00:22:24.670 "assigned_rate_limits": { 00:22:24.670 "rw_ios_per_sec": 0, 00:22:24.670 "rw_mbytes_per_sec": 0, 00:22:24.670 "r_mbytes_per_sec": 0, 00:22:24.670 "w_mbytes_per_sec": 0 00:22:24.670 }, 00:22:24.670 "claimed": true, 00:22:24.670 "claim_type": "read_many_write_one", 00:22:24.670 "zoned": false, 00:22:24.670 "supported_io_types": { 00:22:24.670 "read": true, 00:22:24.670 "write": true, 00:22:24.670 "unmap": true, 00:22:24.670 "flush": true, 00:22:24.670 "reset": true, 00:22:24.670 "nvme_admin": true, 00:22:24.670 "nvme_io": true, 00:22:24.670 "nvme_io_md": false, 00:22:24.670 "write_zeroes": true, 00:22:24.670 "zcopy": false, 00:22:24.670 "get_zone_info": false, 00:22:24.670 "zone_management": false, 00:22:24.670 "zone_append": false, 00:22:24.670 "compare": true, 00:22:24.670 "compare_and_write": false, 00:22:24.670 "abort": true, 00:22:24.670 "seek_hole": false, 00:22:24.670 "seek_data": false, 00:22:24.670 "copy": true, 00:22:24.670 "nvme_iov_md": false 00:22:24.670 }, 00:22:24.670 "driver_specific": { 00:22:24.670 "nvme": [ 
00:22:24.670 { 00:22:24.670 "pci_address": "0000:00:11.0", 00:22:24.670 "trid": { 00:22:24.670 "trtype": "PCIe", 00:22:24.670 "traddr": "0000:00:11.0" 00:22:24.670 }, 00:22:24.670 "ctrlr_data": { 00:22:24.670 "cntlid": 0, 00:22:24.670 "vendor_id": "0x1b36", 00:22:24.670 "model_number": "QEMU NVMe Ctrl", 00:22:24.670 "serial_number": "12341", 00:22:24.670 "firmware_revision": "8.0.0", 00:22:24.670 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:24.670 "oacs": { 00:22:24.670 "security": 0, 00:22:24.670 "format": 1, 00:22:24.670 "firmware": 0, 00:22:24.670 "ns_manage": 1 00:22:24.670 }, 00:22:24.670 "multi_ctrlr": false, 00:22:24.670 "ana_reporting": false 00:22:24.670 }, 00:22:24.671 "vs": { 00:22:24.671 "nvme_version": "1.4" 00:22:24.671 }, 00:22:24.671 "ns_data": { 00:22:24.671 "id": 1, 00:22:24.671 "can_share": false 00:22:24.671 } 00:22:24.671 } 00:22:24.671 ], 00:22:24.671 "mp_policy": "active_passive" 00:22:24.671 } 00:22:24.671 } 00:22:24.671 ]' 00:22:24.671 12:25:55 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:24.671 12:25:55 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:24.931 12:25:55 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:24.931 12:25:55 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:24.931 12:25:55 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:24.931 12:25:55 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:22:24.931 12:25:55 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:24.931 12:25:55 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:24.931 12:25:55 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:24.931 12:25:55 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:24.931 12:25:55 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:25.192 12:25:55 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=20bcbf8b-f50d-49a9-93cf-88ac26ad90ab 00:22:25.192 12:25:55 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:25.192 12:25:55 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 20bcbf8b-f50d-49a9-93cf-88ac26ad90ab 00:22:25.192 12:25:55 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:25.453 12:25:56 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=836b3555-8fb1-4cfc-8aa1-c48e5a983523 00:22:25.453 12:25:56 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 836b3555-8fb1-4cfc-8aa1-c48e5a983523 00:22:25.715 12:25:56 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=bfd85bb8-f078-40b3-8d61-dbc588407143 00:22:25.715 12:25:56 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:25.715 12:25:56 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 bfd85bb8-f078-40b3-8d61-dbc588407143 00:22:25.715 12:25:56 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:25.715 12:25:56 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:25.715 12:25:56 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=bfd85bb8-f078-40b3-8d61-dbc588407143 00:22:25.715 12:25:56 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:25.715 12:25:56 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
bfd85bb8-f078-40b3-8d61-dbc588407143 00:22:25.715 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=bfd85bb8-f078-40b3-8d61-dbc588407143 00:22:25.715 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:25.715 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:25.715 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:25.715 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bfd85bb8-f078-40b3-8d61-dbc588407143 00:22:25.977 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:25.977 { 00:22:25.977 "name": "bfd85bb8-f078-40b3-8d61-dbc588407143", 00:22:25.977 "aliases": [ 00:22:25.977 "lvs/nvme0n1p0" 00:22:25.977 ], 00:22:25.977 "product_name": "Logical Volume", 00:22:25.977 "block_size": 4096, 00:22:25.977 "num_blocks": 26476544, 00:22:25.977 "uuid": "bfd85bb8-f078-40b3-8d61-dbc588407143", 00:22:25.977 "assigned_rate_limits": { 00:22:25.977 "rw_ios_per_sec": 0, 00:22:25.977 "rw_mbytes_per_sec": 0, 00:22:25.977 "r_mbytes_per_sec": 0, 00:22:25.977 "w_mbytes_per_sec": 0 00:22:25.977 }, 00:22:25.977 "claimed": false, 00:22:25.977 "zoned": false, 00:22:25.977 "supported_io_types": { 00:22:25.977 "read": true, 00:22:25.977 "write": true, 00:22:25.977 "unmap": true, 00:22:25.977 "flush": false, 00:22:25.977 "reset": true, 00:22:25.977 "nvme_admin": false, 00:22:25.977 "nvme_io": false, 00:22:25.977 "nvme_io_md": false, 00:22:25.977 "write_zeroes": true, 00:22:25.977 "zcopy": false, 00:22:25.977 "get_zone_info": false, 00:22:25.977 "zone_management": false, 00:22:25.977 "zone_append": false, 00:22:25.977 "compare": false, 00:22:25.977 "compare_and_write": false, 00:22:25.977 "abort": false, 00:22:25.977 "seek_hole": true, 00:22:25.977 "seek_data": true, 00:22:25.977 "copy": false, 00:22:25.977 "nvme_iov_md": false 00:22:25.977 }, 00:22:25.977 "driver_specific": { 00:22:25.977 "lvol": { 00:22:25.977 "lvol_store_uuid": "836b3555-8fb1-4cfc-8aa1-c48e5a983523", 00:22:25.977 "base_bdev": "nvme0n1", 00:22:25.977 "thin_provision": true, 00:22:25.977 "num_allocated_clusters": 0, 00:22:25.977 "snapshot": false, 00:22:25.977 "clone": false, 00:22:25.977 "esnap_clone": false 00:22:25.977 } 00:22:25.977 } 00:22:25.977 } 00:22:25.977 ]' 00:22:25.977 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:25.977 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:25.977 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:25.977 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:25.977 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:25.977 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:25.977 12:25:56 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:25.977 12:25:56 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:25.977 12:25:56 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:26.239 12:25:56 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:26.239 12:25:56 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:26.239 12:25:56 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size bfd85bb8-f078-40b3-8d61-dbc588407143 00:22:26.239 12:25:56 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=bfd85bb8-f078-40b3-8d61-dbc588407143 00:22:26.239 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:26.239 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:26.239 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:26.239 12:25:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bfd85bb8-f078-40b3-8d61-dbc588407143 00:22:26.497 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:26.497 { 00:22:26.497 "name": "bfd85bb8-f078-40b3-8d61-dbc588407143", 00:22:26.497 "aliases": [ 00:22:26.497 "lvs/nvme0n1p0" 00:22:26.497 ], 00:22:26.497 "product_name": "Logical Volume", 00:22:26.497 "block_size": 4096, 00:22:26.497 "num_blocks": 26476544, 00:22:26.497 "uuid": "bfd85bb8-f078-40b3-8d61-dbc588407143", 00:22:26.497 "assigned_rate_limits": { 00:22:26.497 "rw_ios_per_sec": 0, 00:22:26.497 "rw_mbytes_per_sec": 0, 00:22:26.497 "r_mbytes_per_sec": 0, 00:22:26.497 "w_mbytes_per_sec": 0 00:22:26.497 }, 00:22:26.497 "claimed": false, 00:22:26.497 "zoned": false, 00:22:26.497 "supported_io_types": { 00:22:26.497 "read": true, 00:22:26.497 "write": true, 00:22:26.497 "unmap": true, 00:22:26.497 "flush": false, 00:22:26.498 "reset": true, 00:22:26.498 "nvme_admin": false, 00:22:26.498 "nvme_io": false, 00:22:26.498 "nvme_io_md": false, 00:22:26.498 "write_zeroes": true, 00:22:26.498 "zcopy": false, 00:22:26.498 "get_zone_info": false, 00:22:26.498 "zone_management": false, 00:22:26.498 "zone_append": false, 00:22:26.498 "compare": false, 00:22:26.498 "compare_and_write": false, 00:22:26.498 "abort": false, 00:22:26.498 "seek_hole": true, 00:22:26.498 "seek_data": true, 00:22:26.498 "copy": false, 00:22:26.498 "nvme_iov_md": false 00:22:26.498 }, 00:22:26.498 "driver_specific": { 00:22:26.498 "lvol": { 00:22:26.498 "lvol_store_uuid": "836b3555-8fb1-4cfc-8aa1-c48e5a983523", 00:22:26.498 "base_bdev": "nvme0n1", 00:22:26.498 "thin_provision": true, 00:22:26.498 "num_allocated_clusters": 0, 00:22:26.498 "snapshot": false, 00:22:26.498 "clone": false, 00:22:26.498 "esnap_clone": false 00:22:26.498 } 00:22:26.498 } 00:22:26.498 } 00:22:26.498 ]' 00:22:26.498 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:26.498 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:26.498 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:26.498 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:26.498 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:26.498 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:26.498 12:25:57 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:26.498 12:25:57 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:26.756 12:25:57 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:26.756 12:25:57 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size bfd85bb8-f078-40b3-8d61-dbc588407143 00:22:26.756 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=bfd85bb8-f078-40b3-8d61-dbc588407143 00:22:26.756 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:26.756 12:25:57 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # local bs 00:22:26.756 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:26.756 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bfd85bb8-f078-40b3-8d61-dbc588407143 00:22:27.016 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:27.016 { 00:22:27.016 "name": "bfd85bb8-f078-40b3-8d61-dbc588407143", 00:22:27.016 "aliases": [ 00:22:27.016 "lvs/nvme0n1p0" 00:22:27.016 ], 00:22:27.016 "product_name": "Logical Volume", 00:22:27.016 "block_size": 4096, 00:22:27.016 "num_blocks": 26476544, 00:22:27.016 "uuid": "bfd85bb8-f078-40b3-8d61-dbc588407143", 00:22:27.016 "assigned_rate_limits": { 00:22:27.016 "rw_ios_per_sec": 0, 00:22:27.016 "rw_mbytes_per_sec": 0, 00:22:27.016 "r_mbytes_per_sec": 0, 00:22:27.016 "w_mbytes_per_sec": 0 00:22:27.016 }, 00:22:27.016 "claimed": false, 00:22:27.016 "zoned": false, 00:22:27.016 "supported_io_types": { 00:22:27.016 "read": true, 00:22:27.016 "write": true, 00:22:27.016 "unmap": true, 00:22:27.016 "flush": false, 00:22:27.016 "reset": true, 00:22:27.016 "nvme_admin": false, 00:22:27.016 "nvme_io": false, 00:22:27.016 "nvme_io_md": false, 00:22:27.016 "write_zeroes": true, 00:22:27.016 "zcopy": false, 00:22:27.016 "get_zone_info": false, 00:22:27.016 "zone_management": false, 00:22:27.016 "zone_append": false, 00:22:27.016 "compare": false, 00:22:27.016 "compare_and_write": false, 00:22:27.016 "abort": false, 00:22:27.016 "seek_hole": true, 00:22:27.016 "seek_data": true, 00:22:27.016 "copy": false, 00:22:27.016 "nvme_iov_md": false 00:22:27.016 }, 00:22:27.016 "driver_specific": { 00:22:27.016 "lvol": { 00:22:27.016 "lvol_store_uuid": "836b3555-8fb1-4cfc-8aa1-c48e5a983523", 00:22:27.016 "base_bdev": "nvme0n1", 00:22:27.016 "thin_provision": true, 00:22:27.016 "num_allocated_clusters": 0, 00:22:27.016 "snapshot": false, 00:22:27.016 "clone": false, 00:22:27.016 "esnap_clone": false 00:22:27.016 } 00:22:27.016 } 00:22:27.016 } 00:22:27.016 ]' 00:22:27.016 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:27.016 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:27.016 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:27.016 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:27.016 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:27.016 12:25:57 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:27.016 12:25:57 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:27.016 12:25:57 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d bfd85bb8-f078-40b3-8d61-dbc588407143 --l2p_dram_limit 10' 00:22:27.016 12:25:57 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:27.016 12:25:57 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:27.016 12:25:57 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:27.016 12:25:57 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:27.016 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:27.016 12:25:57 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bfd85bb8-f078-40b3-8d61-dbc588407143 --l2p_dram_limit 10 -c nvc0n1p0 00:22:27.016 
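By this point every ingredient for the FTL bdev exists: a 103424 MiB thin-provisioned lvol (bfd85bb8-f078-40b3-8d61-dbc588407143) carved from the base NVMe at 0000:00:11.0, and a 5171 MiB split, nvc0n1p0, taken off the cache NVMe at 0000:00:10.0. (The `[: : integer expression expected` complaint above is restore.sh line 54 running `-eq` against an empty option variable; the test simply evaluates false and the script carries on.) Stripped of the xtrace plumbing, the whole construction reduces to this rpc.py sequence, with every command and argument copied from the trace above:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # base device: lvstore + thin lvol on the data NVMe
    "$rpc_py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    "$rpc_py" bdev_lvol_create_lvstore nvme0n1 lvs
    "$rpc_py" bdev_lvol_create nvme0n1p0 103424 -t -u 836b3555-8fb1-4cfc-8aa1-c48e5a983523
    # NV cache: one 5171 MiB slice split off the cache NVMe
    "$rpc_py" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    "$rpc_py" bdev_split_create nvc0n1 -s 5171 1
    # glue them together; --l2p_dram_limit caps the resident L2P at 10 MiB
    "$rpc_py" -t 240 bdev_ftl_create -b ftl0 -d bfd85bb8-f078-40b3-8d61-dbc588407143 --l2p_dram_limit 10 -c nvc0n1p0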
[2024-12-05 12:25:57.854539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.016 [2024-12-05 12:25:57.854581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:27.016 [2024-12-05 12:25:57.854594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:27.016 [2024-12-05 12:25:57.854601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.016 [2024-12-05 12:25:57.854658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.016 [2024-12-05 12:25:57.854666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:27.016 [2024-12-05 12:25:57.854674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:27.016 [2024-12-05 12:25:57.854681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.016 [2024-12-05 12:25:57.854707] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:27.016 [2024-12-05 12:25:57.855344] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:27.016 [2024-12-05 12:25:57.855363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.016 [2024-12-05 12:25:57.855369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:27.016 [2024-12-05 12:25:57.855378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:22:27.016 [2024-12-05 12:25:57.855383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.016 [2024-12-05 12:25:57.855414] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 82e30c13-829a-4c2c-aff3-a48d611571c4 00:22:27.016 [2024-12-05 12:25:57.856762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.017 [2024-12-05 12:25:57.856881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:27.017 [2024-12-05 12:25:57.856894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:27.017 [2024-12-05 12:25:57.856905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.017 [2024-12-05 12:25:57.864005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.017 [2024-12-05 12:25:57.864111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:27.017 [2024-12-05 12:25:57.864124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.032 ms 00:22:27.017 [2024-12-05 12:25:57.864133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.017 [2024-12-05 12:25:57.864275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.017 [2024-12-05 12:25:57.864285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:27.017 [2024-12-05 12:25:57.864292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:22:27.017 [2024-12-05 12:25:57.864302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.017 [2024-12-05 12:25:57.864345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.017 [2024-12-05 12:25:57.864356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:27.017 [2024-12-05 12:25:57.864364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:27.017 [2024-12-05 12:25:57.864372] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.017 [2024-12-05 12:25:57.864392] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:27.017 [2024-12-05 12:25:57.867669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.017 [2024-12-05 12:25:57.867702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:27.017 [2024-12-05 12:25:57.867713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.282 ms 00:22:27.017 [2024-12-05 12:25:57.867720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.017 [2024-12-05 12:25:57.867749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.017 [2024-12-05 12:25:57.867756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:27.017 [2024-12-05 12:25:57.867764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:27.017 [2024-12-05 12:25:57.867770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.017 [2024-12-05 12:25:57.867792] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:27.017 [2024-12-05 12:25:57.867906] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:27.017 [2024-12-05 12:25:57.867921] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:27.017 [2024-12-05 12:25:57.867930] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:27.017 [2024-12-05 12:25:57.867940] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:27.017 [2024-12-05 12:25:57.867947] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:27.017 [2024-12-05 12:25:57.867955] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:27.017 [2024-12-05 12:25:57.867963] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:27.017 [2024-12-05 12:25:57.867972] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:27.017 [2024-12-05 12:25:57.867978] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:27.017 [2024-12-05 12:25:57.867986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.017 [2024-12-05 12:25:57.867998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:27.017 [2024-12-05 12:25:57.868006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:22:27.017 [2024-12-05 12:25:57.868013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.017 [2024-12-05 12:25:57.868080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.017 [2024-12-05 12:25:57.868087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:27.017 [2024-12-05 12:25:57.868094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:27.017 [2024-12-05 12:25:57.868100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.017 [2024-12-05 12:25:57.868182] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:27.017 [2024-12-05 12:25:57.868190] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:22:27.017 [2024-12-05 12:25:57.868199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:27.017 [2024-12-05 12:25:57.868205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.017 [2024-12-05 12:25:57.868213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:27.017 [2024-12-05 12:25:57.868218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:27.017 [2024-12-05 12:25:57.868225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:27.017 [2024-12-05 12:25:57.868231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:27.017 [2024-12-05 12:25:57.868238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:27.017 [2024-12-05 12:25:57.868243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:27.017 [2024-12-05 12:25:57.868251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:27.017 [2024-12-05 12:25:57.868257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:27.017 [2024-12-05 12:25:57.868265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:27.017 [2024-12-05 12:25:57.868270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:27.017 [2024-12-05 12:25:57.868277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:27.017 [2024-12-05 12:25:57.868282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.017 [2024-12-05 12:25:57.868291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:27.017 [2024-12-05 12:25:57.868297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:27.017 [2024-12-05 12:25:57.868306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.017 [2024-12-05 12:25:57.868311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:27.017 [2024-12-05 12:25:57.868319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:27.017 [2024-12-05 12:25:57.868324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:27.017 [2024-12-05 12:25:57.868331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:27.017 [2024-12-05 12:25:57.868337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:27.017 [2024-12-05 12:25:57.868343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:27.017 [2024-12-05 12:25:57.868348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:27.017 [2024-12-05 12:25:57.868355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:27.017 [2024-12-05 12:25:57.868361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:27.017 [2024-12-05 12:25:57.868368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:27.017 [2024-12-05 12:25:57.868373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:27.017 [2024-12-05 12:25:57.868379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:27.017 [2024-12-05 12:25:57.868384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:27.017 [2024-12-05 12:25:57.868393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:27.017 [2024-12-05 12:25:57.868398] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:27.017 [2024-12-05 12:25:57.868404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:27.017 [2024-12-05 12:25:57.868409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:27.017 [2024-12-05 12:25:57.868416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:27.017 [2024-12-05 12:25:57.868421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:27.017 [2024-12-05 12:25:57.868428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:27.017 [2024-12-05 12:25:57.868433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.017 [2024-12-05 12:25:57.868440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:27.017 [2024-12-05 12:25:57.868446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:27.017 [2024-12-05 12:25:57.868452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.017 [2024-12-05 12:25:57.868458] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:27.017 [2024-12-05 12:25:57.868482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:27.017 [2024-12-05 12:25:57.868488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:27.017 [2024-12-05 12:25:57.868496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:27.017 [2024-12-05 12:25:57.868502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:27.017 [2024-12-05 12:25:57.868512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:27.017 [2024-12-05 12:25:57.868517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:27.017 [2024-12-05 12:25:57.868524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:27.017 [2024-12-05 12:25:57.868529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:27.017 [2024-12-05 12:25:57.868544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:27.017 [2024-12-05 12:25:57.868553] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:27.017 [2024-12-05 12:25:57.868563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:27.017 [2024-12-05 12:25:57.868570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:27.017 [2024-12-05 12:25:57.868579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:27.017 [2024-12-05 12:25:57.868584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:27.017 [2024-12-05 12:25:57.868591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:27.017 [2024-12-05 12:25:57.868596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:27.017 [2024-12-05 12:25:57.868603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:22:27.017 [2024-12-05 12:25:57.868609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:27.018 [2024-12-05 12:25:57.868616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:27.018 [2024-12-05 12:25:57.868621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:27.018 [2024-12-05 12:25:57.868631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:27.018 [2024-12-05 12:25:57.868637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:27.018 [2024-12-05 12:25:57.868644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:27.018 [2024-12-05 12:25:57.868651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:27.018 [2024-12-05 12:25:57.868660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:27.018 [2024-12-05 12:25:57.868666] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:27.018 [2024-12-05 12:25:57.868674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:27.018 [2024-12-05 12:25:57.868680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:27.018 [2024-12-05 12:25:57.868688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:27.018 [2024-12-05 12:25:57.868694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:27.018 [2024-12-05 12:25:57.868701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:27.018 [2024-12-05 12:25:57.868707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.018 [2024-12-05 12:25:57.868715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:27.018 [2024-12-05 12:25:57.868722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:22:27.018 [2024-12-05 12:25:57.868729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.018 [2024-12-05 12:25:57.868770] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
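The layout dump above is internally consistent arithmetic: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB l2p region, and, assuming one entry per 4 KiB logical block (which matches the bdev's 4096-byte block size), those entries address 80 GiB of user space. Only 10 MiB of that 80 MiB map is allowed to stay resident, per the --l2p_dram_limit 10 passed at creation; the l2p cache confirms "9 (of 10) MiB" a few lines below. A quick check using only numbers printed above:

    # l2p region size = entries * address size
    echo "$(( 20971520 * 4 / 1024 / 1024 )) MiB"   # -> 80 MiB, the l2p region above
    # user-addressable space = entries * 4 KiB logical blocks
    echo "$(( 20971520 * 4 / 1024 / 1024 )) GiB"   # -> 80 GiB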
00:22:27.018 [2024-12-05 12:25:57.868782] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:31.259 [2024-12-05 12:26:01.881214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.259 [2024-12-05 12:26:01.881321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:31.259 [2024-12-05 12:26:01.881343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4012.425 ms 00:22:31.259 [2024-12-05 12:26:01.881356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.259 [2024-12-05 12:26:01.918922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.259 [2024-12-05 12:26:01.919204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:31.259 [2024-12-05 12:26:01.919228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.271 ms 00:22:31.259 [2024-12-05 12:26:01.919240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.259 [2024-12-05 12:26:01.919402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.259 [2024-12-05 12:26:01.919419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:31.259 [2024-12-05 12:26:01.919431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:22:31.259 [2024-12-05 12:26:01.919452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.259 [2024-12-05 12:26:01.959476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.259 [2024-12-05 12:26:01.959529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:31.259 [2024-12-05 12:26:01.959542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.959 ms 00:22:31.259 [2024-12-05 12:26:01.959554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.259 [2024-12-05 12:26:01.959595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.259 [2024-12-05 12:26:01.959613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:31.259 [2024-12-05 12:26:01.959623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:31.259 [2024-12-05 12:26:01.959643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.259 [2024-12-05 12:26:01.960368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.259 [2024-12-05 12:26:01.960456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:31.259 [2024-12-05 12:26:01.960498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:22:31.259 [2024-12-05 12:26:01.960510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.259 [2024-12-05 12:26:01.960632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.259 [2024-12-05 12:26:01.960647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:31.259 [2024-12-05 12:26:01.960660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:22:31.259 [2024-12-05 12:26:01.960674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.259 [2024-12-05 12:26:01.981634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.259 [2024-12-05 12:26:01.981885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:31.259 [2024-12-05 
12:26:01.981907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.936 ms 00:22:31.259 [2024-12-05 12:26:01.981920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.259 [2024-12-05 12:26:02.006929] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:31.259 [2024-12-05 12:26:02.012120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.259 [2024-12-05 12:26:02.012172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:31.259 [2024-12-05 12:26:02.012190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.096 ms 00:22:31.259 [2024-12-05 12:26:02.012200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.259 [2024-12-05 12:26:02.117857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.259 [2024-12-05 12:26:02.117921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:31.259 [2024-12-05 12:26:02.117941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.599 ms 00:22:31.259 [2024-12-05 12:26:02.117951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.259 [2024-12-05 12:26:02.118160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.259 [2024-12-05 12:26:02.118178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:31.259 [2024-12-05 12:26:02.118194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:22:31.259 [2024-12-05 12:26:02.118203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.518 [2024-12-05 12:26:02.144830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.518 [2024-12-05 12:26:02.145025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:31.518 [2024-12-05 12:26:02.145055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.567 ms 00:22:31.518 [2024-12-05 12:26:02.145065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.518 [2024-12-05 12:26:02.170044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.518 [2024-12-05 12:26:02.170093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:31.518 [2024-12-05 12:26:02.170109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.938 ms 00:22:31.518 [2024-12-05 12:26:02.170117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.518 [2024-12-05 12:26:02.170796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.518 [2024-12-05 12:26:02.170817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:31.518 [2024-12-05 12:26:02.170830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:22:31.518 [2024-12-05 12:26:02.170842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.518 [2024-12-05 12:26:02.259530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.518 [2024-12-05 12:26:02.259726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:31.518 [2024-12-05 12:26:02.259758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.641 ms 00:22:31.518 [2024-12-05 12:26:02.259768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.518 [2024-12-05 
12:26:02.288522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.518 [2024-12-05 12:26:02.288573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:31.518 [2024-12-05 12:26:02.288590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.621 ms 00:22:31.518 [2024-12-05 12:26:02.288599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.518 [2024-12-05 12:26:02.314845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.518 [2024-12-05 12:26:02.315049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:31.518 [2024-12-05 12:26:02.315077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.190 ms 00:22:31.518 [2024-12-05 12:26:02.315085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.518 [2024-12-05 12:26:02.341183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.518 [2024-12-05 12:26:02.341245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:31.518 [2024-12-05 12:26:02.341262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.019 ms 00:22:31.518 [2024-12-05 12:26:02.341270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.518 [2024-12-05 12:26:02.341327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.518 [2024-12-05 12:26:02.341338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:31.518 [2024-12-05 12:26:02.341353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:31.518 [2024-12-05 12:26:02.341361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.518 [2024-12-05 12:26:02.341493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.518 [2024-12-05 12:26:02.341510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:31.518 [2024-12-05 12:26:02.341523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:31.518 [2024-12-05 12:26:02.341533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.518 [2024-12-05 12:26:02.342947] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4487.787 ms, result 0 00:22:31.518 { 00:22:31.518 "name": "ftl0", 00:22:31.518 "uuid": "82e30c13-829a-4c2c-aff3-a48d611571c4" 00:22:31.518 } 00:22:31.518 12:26:02 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:31.518 12:26:02 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:31.778 12:26:02 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:31.778 12:26:02 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:32.039 [2024-12-05 12:26:02.709938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.039 [2024-12-05 12:26:02.709989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:32.039 [2024-12-05 12:26:02.710002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:32.039 [2024-12-05 12:26:02.710012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.039 [2024-12-05 12:26:02.710037] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
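At this point the test switches from setup to teardown: restore.sh (the @61-@65 trace lines above) snapshots the live bdev configuration and then unloads the FTL device, driving the 'FTL shutdown' management sequence that begins just above and continues below (persist L2P, NV cache, band and trim metadata, superblock, then set the clean state). The two echo calls wrap the output of save_subsystem_config -n bdev in a top-level "subsystems" array so the result is a standalone JSON document. A hedged sketch of the same pattern; the redirect target is an assumption, since the xtrace shows only the commands, though the spdk_dd step later in this run reads test/ftl/config/ftl.json:

    # Capture the running target's bdev subsystem config as standalone JSON
    # that spdk_dd can replay later without a running SPDK target.
    {
        echo '{"subsystems": ['
        scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > test/ftl/config/ftl.json

    # Unload the FTL bdev cleanly; this persists L2P and metadata and marks
    # the device clean, so a later load can restore state instead of rebuild.
    scripts/rpc.py bdev_ftl_unload -b ftl0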
00:22:32.039 [2024-12-05 12:26:02.712912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.039 [2024-12-05 12:26:02.713058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:32.039 [2024-12-05 12:26:02.713080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.855 ms 00:22:32.039 [2024-12-05 12:26:02.713088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.039 [2024-12-05 12:26:02.713379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.039 [2024-12-05 12:26:02.713393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:32.039 [2024-12-05 12:26:02.713404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:22:32.039 [2024-12-05 12:26:02.713412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.039 [2024-12-05 12:26:02.716672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.039 [2024-12-05 12:26:02.716692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:32.039 [2024-12-05 12:26:02.716703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.242 ms 00:22:32.039 [2024-12-05 12:26:02.716712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.039 [2024-12-05 12:26:02.722913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.039 [2024-12-05 12:26:02.722941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:32.039 [2024-12-05 12:26:02.722956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.182 ms 00:22:32.039 [2024-12-05 12:26:02.722964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.039 [2024-12-05 12:26:02.747986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.039 [2024-12-05 12:26:02.748107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:32.039 [2024-12-05 12:26:02.748128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.965 ms 00:22:32.039 [2024-12-05 12:26:02.748136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.039 [2024-12-05 12:26:02.764900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.039 [2024-12-05 12:26:02.764935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:32.039 [2024-12-05 12:26:02.764949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.725 ms 00:22:32.039 [2024-12-05 12:26:02.764957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.039 [2024-12-05 12:26:02.765107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.039 [2024-12-05 12:26:02.765119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:32.039 [2024-12-05 12:26:02.765141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:22:32.039 [2024-12-05 12:26:02.765149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.039 [2024-12-05 12:26:02.788994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.039 [2024-12-05 12:26:02.789027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:32.039 [2024-12-05 12:26:02.789040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.820 ms 00:22:32.039 [2024-12-05 12:26:02.789048] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.039 [2024-12-05 12:26:02.812968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.039 [2024-12-05 12:26:02.813004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:32.039 [2024-12-05 12:26:02.813017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.878 ms 00:22:32.039 [2024-12-05 12:26:02.813025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.039 [2024-12-05 12:26:02.836953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.039 [2024-12-05 12:26:02.836994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:32.039 [2024-12-05 12:26:02.837008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.879 ms 00:22:32.039 [2024-12-05 12:26:02.837016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.039 [2024-12-05 12:26:02.861767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.039 [2024-12-05 12:26:02.861825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:32.039 [2024-12-05 12:26:02.861839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.657 ms 00:22:32.039 [2024-12-05 12:26:02.861847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.039 [2024-12-05 12:26:02.861896] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:32.039 [2024-12-05 12:26:02.861914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.861931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.861939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.861950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.861958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.861969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.861976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.861990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.861998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.862010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.862019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.862029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.862037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.862047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 
12:26:02.862055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.862066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.862073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.862083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.862091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.862104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.862111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:32.039 [2024-12-05 12:26:02.862121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:22:32.040 [2024-12-05 12:26:02.862291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:32.040 [2024-12-05 12:26:02.862910] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:32.040 [2024-12-05 12:26:02.862920] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 82e30c13-829a-4c2c-aff3-a48d611571c4 00:22:32.040 [2024-12-05 12:26:02.862929] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:32.040 [2024-12-05 12:26:02.862942] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:32.040 [2024-12-05 12:26:02.862955] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:32.040 [2024-12-05 12:26:02.862965] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:32.040 [2024-12-05 12:26:02.862973] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:32.040 [2024-12-05 12:26:02.862984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:32.040 [2024-12-05 12:26:02.862991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:32.040 [2024-12-05 12:26:02.863000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:32.040 [2024-12-05 12:26:02.863009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:32.040 [2024-12-05 12:26:02.863019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.040 [2024-12-05 12:26:02.863027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:32.040 [2024-12-05 12:26:02.863038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:22:32.040 [2024-12-05 12:26:02.863049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.040 [2024-12-05 12:26:02.877817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.040 [2024-12-05 12:26:02.877858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:22:32.040 [2024-12-05 12:26:02.877872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.724 ms 00:22:32.040 [2024-12-05 12:26:02.877881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.040 [2024-12-05 12:26:02.878321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.040 [2024-12-05 12:26:02.878348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:32.040 [2024-12-05 12:26:02.878364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:22:32.040 [2024-12-05 12:26:02.878372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.301 [2024-12-05 12:26:02.928258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.301 [2024-12-05 12:26:02.928308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:32.301 [2024-12-05 12:26:02.928324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.301 [2024-12-05 12:26:02.928333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.301 [2024-12-05 12:26:02.928412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.301 [2024-12-05 12:26:02.928422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:32.301 [2024-12-05 12:26:02.928438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.301 [2024-12-05 12:26:02.928447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.301 [2024-12-05 12:26:02.928563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.301 [2024-12-05 12:26:02.928578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:32.301 [2024-12-05 12:26:02.928590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.301 [2024-12-05 12:26:02.928600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.301 [2024-12-05 12:26:02.928630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.301 [2024-12-05 12:26:02.928641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:32.301 [2024-12-05 12:26:02.928654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.301 [2024-12-05 12:26:02.928666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.301 [2024-12-05 12:26:03.018648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.301 [2024-12-05 12:26:03.018709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:32.301 [2024-12-05 12:26:03.018727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.301 [2024-12-05 12:26:03.018736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.301 [2024-12-05 12:26:03.092543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.301 [2024-12-05 12:26:03.092604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:32.301 [2024-12-05 12:26:03.092620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.301 [2024-12-05 12:26:03.092633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.301 [2024-12-05 12:26:03.092768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.301 [2024-12-05 12:26:03.092781] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:32.301 [2024-12-05 12:26:03.092795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.301 [2024-12-05 12:26:03.092804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.301 [2024-12-05 12:26:03.092862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.301 [2024-12-05 12:26:03.092874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:32.301 [2024-12-05 12:26:03.092886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.301 [2024-12-05 12:26:03.092895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.301 [2024-12-05 12:26:03.093013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.301 [2024-12-05 12:26:03.093026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:32.301 [2024-12-05 12:26:03.093038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.301 [2024-12-05 12:26:03.093046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.301 [2024-12-05 12:26:03.093085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.301 [2024-12-05 12:26:03.093096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:32.301 [2024-12-05 12:26:03.093108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.301 [2024-12-05 12:26:03.093117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.301 [2024-12-05 12:26:03.093204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.301 [2024-12-05 12:26:03.093216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:32.301 [2024-12-05 12:26:03.093227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.301 [2024-12-05 12:26:03.093239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.301 [2024-12-05 12:26:03.093305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.301 [2024-12-05 12:26:03.093319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:32.301 [2024-12-05 12:26:03.093330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.301 [2024-12-05 12:26:03.093338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.301 [2024-12-05 12:26:03.093550] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 383.522 ms, result 0 00:22:32.301 true 00:22:32.301 12:26:03 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77699 00:22:32.301 12:26:03 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77699 ']' 00:22:32.301 12:26:03 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77699 00:22:32.301 12:26:03 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:22:32.301 12:26:03 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.301 12:26:03 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77699 00:22:32.301 12:26:03 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.301 killing process with pid 77699 00:22:32.301 12:26:03 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:22:32.301 12:26:03 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77699' 00:22:32.301 12:26:03 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77699 00:22:32.301 12:26:03 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77699 00:22:38.878 12:26:09 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:22:43.074 262144+0 records in 00:22:43.074 262144+0 records out 00:22:43.074 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.91429 s, 274 MB/s 00:22:43.074 12:26:13 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:44.011 12:26:14 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:44.011 [2024-12-05 12:26:14.813092] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:22:44.011 [2024-12-05 12:26:14.813212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77932 ] 00:22:44.272 [2024-12-05 12:26:14.971208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.272 [2024-12-05 12:26:15.097555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.841 [2024-12-05 12:26:15.433667] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:44.841 [2024-12-05 12:26:15.433767] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:44.841 [2024-12-05 12:26:15.598177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.841 [2024-12-05 12:26:15.598250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:44.841 [2024-12-05 12:26:15.598267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:44.841 [2024-12-05 12:26:15.598276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.841 [2024-12-05 12:26:15.598333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.841 [2024-12-05 12:26:15.598347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:44.842 [2024-12-05 12:26:15.598356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:44.842 [2024-12-05 12:26:15.598365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.842 [2024-12-05 12:26:15.598386] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:44.842 [2024-12-05 12:26:15.599144] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:44.842 [2024-12-05 12:26:15.599174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.842 [2024-12-05 12:26:15.599183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:44.842 [2024-12-05 12:26:15.599194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:22:44.842 [2024-12-05 12:26:15.599203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.842 [2024-12-05 12:26:15.601531] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:44.842 [2024-12-05 12:26:15.617002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.842 [2024-12-05 12:26:15.617059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:44.842 [2024-12-05 12:26:15.617074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.474 ms 00:22:44.842 [2024-12-05 12:26:15.617083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.842 [2024-12-05 12:26:15.617176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.842 [2024-12-05 12:26:15.617187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:44.842 [2024-12-05 12:26:15.617196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:44.842 [2024-12-05 12:26:15.617204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.842 [2024-12-05 12:26:15.628915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.842 [2024-12-05 12:26:15.628961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:44.842 [2024-12-05 12:26:15.628973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.630 ms 00:22:44.842 [2024-12-05 12:26:15.628989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.842 [2024-12-05 12:26:15.629076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.842 [2024-12-05 12:26:15.629086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:44.842 [2024-12-05 12:26:15.629096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:44.842 [2024-12-05 12:26:15.629104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.842 [2024-12-05 12:26:15.629179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.842 [2024-12-05 12:26:15.629191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:44.842 [2024-12-05 12:26:15.629201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:44.842 [2024-12-05 12:26:15.629209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.842 [2024-12-05 12:26:15.629237] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:44.842 [2024-12-05 12:26:15.633830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.842 [2024-12-05 12:26:15.633874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:44.842 [2024-12-05 12:26:15.633890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.598 ms 00:22:44.842 [2024-12-05 12:26:15.633899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.842 [2024-12-05 12:26:15.633941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.842 [2024-12-05 12:26:15.633950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:44.842 [2024-12-05 12:26:15.633959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:44.842 [2024-12-05 12:26:15.633968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.842 [2024-12-05 12:26:15.634007] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:44.842 [2024-12-05 12:26:15.634035] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:44.842 [2024-12-05 12:26:15.634078] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:44.842 [2024-12-05 12:26:15.634100] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:44.842 [2024-12-05 12:26:15.634213] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:44.842 [2024-12-05 12:26:15.634226] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:44.842 [2024-12-05 12:26:15.634238] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:44.842 [2024-12-05 12:26:15.634249] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:44.842 [2024-12-05 12:26:15.634260] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:44.842 [2024-12-05 12:26:15.634270] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:44.842 [2024-12-05 12:26:15.634281] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:44.842 [2024-12-05 12:26:15.634292] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:44.842 [2024-12-05 12:26:15.634302] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:44.842 [2024-12-05 12:26:15.634311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.842 [2024-12-05 12:26:15.634322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:44.842 [2024-12-05 12:26:15.634331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:22:44.842 [2024-12-05 12:26:15.634338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.842 [2024-12-05 12:26:15.634423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.842 [2024-12-05 12:26:15.634434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:44.842 [2024-12-05 12:26:15.634442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:44.842 [2024-12-05 12:26:15.634450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.842 [2024-12-05 12:26:15.634584] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:44.842 [2024-12-05 12:26:15.634599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:44.842 [2024-12-05 12:26:15.634610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:44.842 [2024-12-05 12:26:15.634620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.842 [2024-12-05 12:26:15.634629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:44.842 [2024-12-05 12:26:15.634638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:44.842 [2024-12-05 12:26:15.634645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:44.842 [2024-12-05 12:26:15.634653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:44.842 [2024-12-05 12:26:15.634661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:44.842 [2024-12-05 
12:26:15.634668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:44.842 [2024-12-05 12:26:15.634677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:44.842 [2024-12-05 12:26:15.634686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:44.842 [2024-12-05 12:26:15.634694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:44.842 [2024-12-05 12:26:15.634709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:44.842 [2024-12-05 12:26:15.634718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:44.842 [2024-12-05 12:26:15.634726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.842 [2024-12-05 12:26:15.634734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:44.842 [2024-12-05 12:26:15.634742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:44.842 [2024-12-05 12:26:15.634749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.842 [2024-12-05 12:26:15.634755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:44.842 [2024-12-05 12:26:15.634763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:44.842 [2024-12-05 12:26:15.634769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.842 [2024-12-05 12:26:15.634776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:44.842 [2024-12-05 12:26:15.634784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:44.842 [2024-12-05 12:26:15.634791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.842 [2024-12-05 12:26:15.634797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:44.842 [2024-12-05 12:26:15.634803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:44.842 [2024-12-05 12:26:15.634811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.842 [2024-12-05 12:26:15.634818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:44.842 [2024-12-05 12:26:15.634824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:44.842 [2024-12-05 12:26:15.634833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:44.842 [2024-12-05 12:26:15.634840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:44.842 [2024-12-05 12:26:15.634847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:44.842 [2024-12-05 12:26:15.634853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:44.842 [2024-12-05 12:26:15.634860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:44.842 [2024-12-05 12:26:15.634867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:44.842 [2024-12-05 12:26:15.634873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:44.842 [2024-12-05 12:26:15.634881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:44.842 [2024-12-05 12:26:15.634889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:44.842 [2024-12-05 12:26:15.634896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.842 [2024-12-05 12:26:15.634902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:22:44.842 [2024-12-05 12:26:15.634910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:44.842 [2024-12-05 12:26:15.634919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.842 [2024-12-05 12:26:15.634926] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:44.842 [2024-12-05 12:26:15.634934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:44.842 [2024-12-05 12:26:15.634942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:44.842 [2024-12-05 12:26:15.634954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.842 [2024-12-05 12:26:15.634963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:44.842 [2024-12-05 12:26:15.634970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:44.842 [2024-12-05 12:26:15.634977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:44.842 [2024-12-05 12:26:15.634984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:44.842 [2024-12-05 12:26:15.634991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:44.842 [2024-12-05 12:26:15.635000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:44.842 [2024-12-05 12:26:15.635010] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:44.842 [2024-12-05 12:26:15.635019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:44.842 [2024-12-05 12:26:15.635031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:44.842 [2024-12-05 12:26:15.635038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:44.842 [2024-12-05 12:26:15.635046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:44.842 [2024-12-05 12:26:15.635055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:44.842 [2024-12-05 12:26:15.635063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:44.842 [2024-12-05 12:26:15.635070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:44.842 [2024-12-05 12:26:15.635077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:44.842 [2024-12-05 12:26:15.635084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:44.842 [2024-12-05 12:26:15.635091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:44.842 [2024-12-05 12:26:15.635099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:44.842 [2024-12-05 12:26:15.635107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:44.842 [2024-12-05 12:26:15.635115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:44.842 [2024-12-05 12:26:15.635121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:44.842 [2024-12-05 12:26:15.635129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:44.842 [2024-12-05 12:26:15.635136] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:44.842 [2024-12-05 12:26:15.635145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:44.842 [2024-12-05 12:26:15.635156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:44.842 [2024-12-05 12:26:15.635164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:44.842 [2024-12-05 12:26:15.635172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:44.842 [2024-12-05 12:26:15.635180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:44.842 [2024-12-05 12:26:15.635188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.842 [2024-12-05 12:26:15.635197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:44.842 [2024-12-05 12:26:15.635206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:22:44.842 [2024-12-05 12:26:15.635216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.842 [2024-12-05 12:26:15.673771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.842 [2024-12-05 12:26:15.673824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:44.842 [2024-12-05 12:26:15.673838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.503 ms 00:22:44.842 [2024-12-05 12:26:15.673851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.842 [2024-12-05 12:26:15.673949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.842 [2024-12-05 12:26:15.673959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:44.842 [2024-12-05 12:26:15.673967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:44.842 [2024-12-05 12:26:15.673976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.102 [2024-12-05 12:26:15.721305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.102 [2024-12-05 12:26:15.721367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:45.102 [2024-12-05 12:26:15.721383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.264 ms 00:22:45.102 [2024-12-05 12:26:15.721394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.102 [2024-12-05 12:26:15.721446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.102 [2024-12-05 
12:26:15.721459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:45.102 [2024-12-05 12:26:15.721486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:45.102 [2024-12-05 12:26:15.721495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.102 [2024-12-05 12:26:15.722256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.102 [2024-12-05 12:26:15.722307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:45.102 [2024-12-05 12:26:15.722319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:22:45.102 [2024-12-05 12:26:15.722329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.102 [2024-12-05 12:26:15.722527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.102 [2024-12-05 12:26:15.722542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:45.102 [2024-12-05 12:26:15.722561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:22:45.102 [2024-12-05 12:26:15.722570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.102 [2024-12-05 12:26:15.740930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.102 [2024-12-05 12:26:15.740985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:45.102 [2024-12-05 12:26:15.740998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.336 ms 00:22:45.102 [2024-12-05 12:26:15.741007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.102 [2024-12-05 12:26:15.756458] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:45.102 [2024-12-05 12:26:15.756528] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:45.102 [2024-12-05 12:26:15.756544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.102 [2024-12-05 12:26:15.756555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:45.102 [2024-12-05 12:26:15.756565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.398 ms 00:22:45.102 [2024-12-05 12:26:15.756574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.102 [2024-12-05 12:26:15.782892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.102 [2024-12-05 12:26:15.782951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:45.102 [2024-12-05 12:26:15.782964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.262 ms 00:22:45.102 [2024-12-05 12:26:15.782973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.102 [2024-12-05 12:26:15.795949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.102 [2024-12-05 12:26:15.796017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:45.102 [2024-12-05 12:26:15.796029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.915 ms 00:22:45.102 [2024-12-05 12:26:15.796038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.102 [2024-12-05 12:26:15.808738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.102 [2024-12-05 12:26:15.808789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:22:45.102 [2024-12-05 12:26:15.808801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.652 ms 00:22:45.102 [2024-12-05 12:26:15.808810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.102 [2024-12-05 12:26:15.809525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.102 [2024-12-05 12:26:15.809563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:45.102 [2024-12-05 12:26:15.809575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:22:45.102 [2024-12-05 12:26:15.809587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.102 [2024-12-05 12:26:15.880345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.102 [2024-12-05 12:26:15.880420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:45.102 [2024-12-05 12:26:15.880437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.735 ms 00:22:45.102 [2024-12-05 12:26:15.880455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.102 [2024-12-05 12:26:15.892040] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:45.102 [2024-12-05 12:26:15.895616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.102 [2024-12-05 12:26:15.895661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:45.102 [2024-12-05 12:26:15.895674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.079 ms 00:22:45.102 [2024-12-05 12:26:15.895683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.102 [2024-12-05 12:26:15.895779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.102 [2024-12-05 12:26:15.895792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:45.102 [2024-12-05 12:26:15.895803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:45.102 [2024-12-05 12:26:15.895812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.102 [2024-12-05 12:26:15.895904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.102 [2024-12-05 12:26:15.895918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:45.102 [2024-12-05 12:26:15.895927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:45.102 [2024-12-05 12:26:15.895936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.102 [2024-12-05 12:26:15.895962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.102 [2024-12-05 12:26:15.895973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:45.103 [2024-12-05 12:26:15.895983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:45.103 [2024-12-05 12:26:15.895992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.103 [2024-12-05 12:26:15.896037] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:45.103 [2024-12-05 12:26:15.896051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.103 [2024-12-05 12:26:15.896063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:45.103 [2024-12-05 12:26:15.896074] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:45.103 [2024-12-05 12:26:15.896083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.103 [2024-12-05 12:26:15.922813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.103 [2024-12-05 12:26:15.922870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:45.103 [2024-12-05 12:26:15.922884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.710 ms 00:22:45.103 [2024-12-05 12:26:15.922900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.103 [2024-12-05 12:26:15.923003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.103 [2024-12-05 12:26:15.923014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:45.103 [2024-12-05 12:26:15.923024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:45.103 [2024-12-05 12:26:15.923033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.103 [2024-12-05 12:26:15.924609] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 325.790 ms, result 0 00:22:46.479  [2024-12-05T12:26:18.288Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-05T12:26:19.234Z] Copying: 29/1024 [MB] (16 MBps) [2024-12-05T12:26:20.168Z] Copying: 44/1024 [MB] (15 MBps) [2024-12-05T12:26:21.111Z] Copying: 70/1024 [MB] (26 MBps) [2024-12-05T12:26:22.055Z] Copying: 86/1024 [MB] (16 MBps) [2024-12-05T12:26:22.996Z] Copying: 98/1024 [MB] (11 MBps) [2024-12-05T12:26:23.941Z] Copying: 118/1024 [MB] (19 MBps) [2024-12-05T12:26:25.324Z] Copying: 134/1024 [MB] (16 MBps) [2024-12-05T12:26:26.263Z] Copying: 144/1024 [MB] (10 MBps) [2024-12-05T12:26:27.202Z] Copying: 156/1024 [MB] (11 MBps) [2024-12-05T12:26:28.143Z] Copying: 166/1024 [MB] (10 MBps) [2024-12-05T12:26:29.085Z] Copying: 180/1024 [MB] (13 MBps) [2024-12-05T12:26:30.026Z] Copying: 195/1024 [MB] (14 MBps) [2024-12-05T12:26:30.959Z] Copying: 205/1024 [MB] (10 MBps) [2024-12-05T12:26:32.333Z] Copying: 224/1024 [MB] (18 MBps) [2024-12-05T12:26:33.269Z] Copying: 248/1024 [MB] (24 MBps) [2024-12-05T12:26:34.272Z] Copying: 269/1024 [MB] (20 MBps) [2024-12-05T12:26:35.212Z] Copying: 282/1024 [MB] (13 MBps) [2024-12-05T12:26:36.179Z] Copying: 295/1024 [MB] (13 MBps) [2024-12-05T12:26:37.120Z] Copying: 311/1024 [MB] (15 MBps) [2024-12-05T12:26:38.063Z] Copying: 327/1024 [MB] (16 MBps) [2024-12-05T12:26:39.005Z] Copying: 349/1024 [MB] (21 MBps) [2024-12-05T12:26:39.946Z] Copying: 372/1024 [MB] (23 MBps) [2024-12-05T12:26:41.331Z] Copying: 395/1024 [MB] (22 MBps) [2024-12-05T12:26:42.272Z] Copying: 415/1024 [MB] (19 MBps) [2024-12-05T12:26:43.213Z] Copying: 438/1024 [MB] (22 MBps) [2024-12-05T12:26:44.153Z] Copying: 456/1024 [MB] (18 MBps) [2024-12-05T12:26:45.096Z] Copying: 488/1024 [MB] (31 MBps) [2024-12-05T12:26:46.032Z] Copying: 515/1024 [MB] (26 MBps) [2024-12-05T12:26:46.972Z] Copying: 542/1024 [MB] (26 MBps) [2024-12-05T12:26:48.358Z] Copying: 567/1024 [MB] (25 MBps) [2024-12-05T12:26:49.301Z] Copying: 589/1024 [MB] (22 MBps) [2024-12-05T12:26:50.241Z] Copying: 612/1024 [MB] (22 MBps) [2024-12-05T12:26:51.179Z] Copying: 634/1024 [MB] (22 MBps) [2024-12-05T12:26:52.115Z] Copying: 659/1024 [MB] (24 MBps) [2024-12-05T12:26:53.058Z] Copying: 679/1024 [MB] (20 MBps) [2024-12-05T12:26:54.003Z] Copying: 696/1024 [MB] (16 MBps) [2024-12-05T12:26:54.948Z] Copying: 708/1024 [MB] (12 MBps) [2024-12-05T12:26:56.327Z] Copying: 
735992/1048576 [kB] (10096 kBps) [2024-12-05T12:26:57.268Z] Copying: 731/1024 [MB] (12 MBps) [2024-12-05T12:26:58.201Z] Copying: 754/1024 [MB] (23 MBps) [2024-12-05T12:26:59.136Z] Copying: 772/1024 [MB] (17 MBps) [2024-12-05T12:27:00.070Z] Copying: 795/1024 [MB] (23 MBps) [2024-12-05T12:27:01.003Z] Copying: 815/1024 [MB] (20 MBps) [2024-12-05T12:27:02.376Z] Copying: 834/1024 [MB] (18 MBps) [2024-12-05T12:27:03.011Z] Copying: 855/1024 [MB] (21 MBps) [2024-12-05T12:27:03.952Z] Copying: 878/1024 [MB] (23 MBps) [2024-12-05T12:27:05.328Z] Copying: 896/1024 [MB] (17 MBps) [2024-12-05T12:27:06.263Z] Copying: 919/1024 [MB] (23 MBps) [2024-12-05T12:27:07.200Z] Copying: 937/1024 [MB] (18 MBps) [2024-12-05T12:27:08.141Z] Copying: 951/1024 [MB] (13 MBps) [2024-12-05T12:27:09.080Z] Copying: 968/1024 [MB] (17 MBps) [2024-12-05T12:27:10.014Z] Copying: 978/1024 [MB] (10 MBps) [2024-12-05T12:27:10.947Z] Copying: 994/1024 [MB] (16 MBps) [2024-12-05T12:27:10.947Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-05 12:27:10.915760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.078 [2024-12-05 12:27:10.915802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:40.078 [2024-12-05 12:27:10.915816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:40.078 [2024-12-05 12:27:10.915824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.078 [2024-12-05 12:27:10.915841] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:40.078 [2024-12-05 12:27:10.918182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.078 [2024-12-05 12:27:10.918208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:40.078 [2024-12-05 12:27:10.918222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.328 ms 00:23:40.078 [2024-12-05 12:27:10.918229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.078 [2024-12-05 12:27:10.919863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.078 [2024-12-05 12:27:10.919888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:40.078 [2024-12-05 12:27:10.919896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.616 ms 00:23:40.078 [2024-12-05 12:27:10.919903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.078 [2024-12-05 12:27:10.935083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.078 [2024-12-05 12:27:10.935115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:40.078 [2024-12-05 12:27:10.935124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.169 ms 00:23:40.078 [2024-12-05 12:27:10.935132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.078 [2024-12-05 12:27:10.939878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.078 [2024-12-05 12:27:10.939898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:40.078 [2024-12-05 12:27:10.939907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.716 ms 00:23:40.078 [2024-12-05 12:27:10.939914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.338 [2024-12-05 12:27:10.959661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.338 [2024-12-05 12:27:10.959684] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:40.338 [2024-12-05 12:27:10.959693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.703 ms 00:23:40.338 [2024-12-05 12:27:10.959699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.338 [2024-12-05 12:27:10.971569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.338 [2024-12-05 12:27:10.971591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:40.338 [2024-12-05 12:27:10.971600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.844 ms 00:23:40.338 [2024-12-05 12:27:10.971607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.338 [2024-12-05 12:27:10.971698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.338 [2024-12-05 12:27:10.971709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:40.338 [2024-12-05 12:27:10.971716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:40.338 [2024-12-05 12:27:10.971722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.338 [2024-12-05 12:27:10.990164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.338 [2024-12-05 12:27:10.990184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:40.338 [2024-12-05 12:27:10.990192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.431 ms 00:23:40.338 [2024-12-05 12:27:10.990197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.338 [2024-12-05 12:27:11.008136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.338 [2024-12-05 12:27:11.008159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:40.338 [2024-12-05 12:27:11.008166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.914 ms 00:23:40.338 [2024-12-05 12:27:11.008172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.338 [2024-12-05 12:27:11.025988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.338 [2024-12-05 12:27:11.026010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:40.338 [2024-12-05 12:27:11.026018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.792 ms 00:23:40.338 [2024-12-05 12:27:11.026023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.338 [2024-12-05 12:27:11.043254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.338 [2024-12-05 12:27:11.043276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:40.338 [2024-12-05 12:27:11.043283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.187 ms 00:23:40.338 [2024-12-05 12:27:11.043288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.338 [2024-12-05 12:27:11.043313] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:40.338 [2024-12-05 12:27:11.043325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: 
free 00:23:40.338 [2024-12-05 12:27:11.043349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:40.338 [2024-12-05 12:27:11.043479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 
wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043787] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:40.339 [2024-12-05 12:27:11.043930] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:40.339 [2024-12-05 12:27:11.043939] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 82e30c13-829a-4c2c-aff3-a48d611571c4 00:23:40.339 [2024-12-05 12:27:11.043945] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] total valid LBAs: 0 00:23:40.339 [2024-12-05 12:27:11.043950] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:40.339 [2024-12-05 12:27:11.043956] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:40.339 [2024-12-05 12:27:11.043962] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:40.339 [2024-12-05 12:27:11.043968] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:40.339 [2024-12-05 12:27:11.044002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:40.339 [2024-12-05 12:27:11.044008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:40.339 [2024-12-05 12:27:11.044013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:40.339 [2024-12-05 12:27:11.044019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:40.339 [2024-12-05 12:27:11.044024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.339 [2024-12-05 12:27:11.044033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:40.339 [2024-12-05 12:27:11.044039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:23:40.339 [2024-12-05 12:27:11.044045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.339 [2024-12-05 12:27:11.054257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.339 [2024-12-05 12:27:11.054273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:40.340 [2024-12-05 12:27:11.054281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.197 ms 00:23:40.340 [2024-12-05 12:27:11.054288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.340 [2024-12-05 12:27:11.054596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.340 [2024-12-05 12:27:11.054605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:40.340 [2024-12-05 12:27:11.054611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:23:40.340 [2024-12-05 12:27:11.054621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.340 [2024-12-05 12:27:11.082111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.340 [2024-12-05 12:27:11.082134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.340 [2024-12-05 12:27:11.082142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.340 [2024-12-05 12:27:11.082149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.340 [2024-12-05 12:27:11.082193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.340 [2024-12-05 12:27:11.082200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.340 [2024-12-05 12:27:11.082206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.340 [2024-12-05 12:27:11.082215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.340 [2024-12-05 12:27:11.082258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.340 [2024-12-05 12:27:11.082268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.340 [2024-12-05 12:27:11.082274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.340 [2024-12-05 
12:27:11.082280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.340 [2024-12-05 12:27:11.082292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.340 [2024-12-05 12:27:11.082299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.340 [2024-12-05 12:27:11.082305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.340 [2024-12-05 12:27:11.082310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.340 [2024-12-05 12:27:11.144632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.340 [2024-12-05 12:27:11.144661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.340 [2024-12-05 12:27:11.144670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.340 [2024-12-05 12:27:11.144677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.340 [2024-12-05 12:27:11.195698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.340 [2024-12-05 12:27:11.195731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.340 [2024-12-05 12:27:11.195742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.340 [2024-12-05 12:27:11.195752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.340 [2024-12-05 12:27:11.195819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.340 [2024-12-05 12:27:11.195828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:40.340 [2024-12-05 12:27:11.195834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.340 [2024-12-05 12:27:11.195841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.340 [2024-12-05 12:27:11.195870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.340 [2024-12-05 12:27:11.195877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:40.340 [2024-12-05 12:27:11.195884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.340 [2024-12-05 12:27:11.195890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.340 [2024-12-05 12:27:11.195969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.340 [2024-12-05 12:27:11.195978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:40.340 [2024-12-05 12:27:11.195985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.340 [2024-12-05 12:27:11.195990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.340 [2024-12-05 12:27:11.196015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.340 [2024-12-05 12:27:11.196022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:40.340 [2024-12-05 12:27:11.196028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.340 [2024-12-05 12:27:11.196034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.340 [2024-12-05 12:27:11.196066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.340 [2024-12-05 12:27:11.196076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:40.340 [2024-12-05 12:27:11.196083] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.340 [2024-12-05 12:27:11.196089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.340 [2024-12-05 12:27:11.196126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:40.340 [2024-12-05 12:27:11.196134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:40.340 [2024-12-05 12:27:11.196140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:40.340 [2024-12-05 12:27:11.196146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.340 [2024-12-05 12:27:11.196255] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 280.464 ms, result 0 00:23:41.276 00:23:41.276 00:23:41.276 12:27:12 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:41.276 [2024-12-05 12:27:12.140571] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:23:41.276 [2024-12-05 12:27:12.140703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78560 ] 00:23:41.535 [2024-12-05 12:27:12.299117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.535 [2024-12-05 12:27:12.397031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.793 [2024-12-05 12:27:12.630253] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:41.793 [2024-12-05 12:27:12.630312] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:42.052 [2024-12-05 12:27:12.783387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.052 [2024-12-05 12:27:12.783429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:42.052 [2024-12-05 12:27:12.783441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:42.052 [2024-12-05 12:27:12.783447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.052 [2024-12-05 12:27:12.783496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.052 [2024-12-05 12:27:12.783507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:42.052 [2024-12-05 12:27:12.783514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:42.052 [2024-12-05 12:27:12.783520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.052 [2024-12-05 12:27:12.783534] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:42.052 [2024-12-05 12:27:12.784080] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:42.052 [2024-12-05 12:27:12.784100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.052 [2024-12-05 12:27:12.784106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:42.052 [2024-12-05 12:27:12.784115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:23:42.052 [2024-12-05 12:27:12.784121] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.052 [2024-12-05 12:27:12.785397] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:42.052 [2024-12-05 12:27:12.795796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.052 [2024-12-05 12:27:12.795835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:42.052 [2024-12-05 12:27:12.795845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.400 ms 00:23:42.053 [2024-12-05 12:27:12.795852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.053 [2024-12-05 12:27:12.795908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.053 [2024-12-05 12:27:12.795916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:42.053 [2024-12-05 12:27:12.795922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:42.053 [2024-12-05 12:27:12.795928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.053 [2024-12-05 12:27:12.802182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.053 [2024-12-05 12:27:12.802209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:42.053 [2024-12-05 12:27:12.802218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.207 ms 00:23:42.053 [2024-12-05 12:27:12.802227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.053 [2024-12-05 12:27:12.802285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.053 [2024-12-05 12:27:12.802292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:42.053 [2024-12-05 12:27:12.802299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:42.053 [2024-12-05 12:27:12.802304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.053 [2024-12-05 12:27:12.802337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.053 [2024-12-05 12:27:12.802346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:42.053 [2024-12-05 12:27:12.802353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:42.053 [2024-12-05 12:27:12.802359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.053 [2024-12-05 12:27:12.802376] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:42.053 [2024-12-05 12:27:12.805510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.053 [2024-12-05 12:27:12.805536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:42.053 [2024-12-05 12:27:12.805545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.138 ms 00:23:42.053 [2024-12-05 12:27:12.805552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.053 [2024-12-05 12:27:12.805580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.053 [2024-12-05 12:27:12.805586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:42.053 [2024-12-05 12:27:12.805593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:42.053 [2024-12-05 12:27:12.805599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.053 [2024-12-05 12:27:12.805614] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:42.053 [2024-12-05 12:27:12.805631] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:42.053 [2024-12-05 12:27:12.805660] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:42.053 [2024-12-05 12:27:12.805676] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:42.053 [2024-12-05 12:27:12.805759] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:42.053 [2024-12-05 12:27:12.805770] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:42.053 [2024-12-05 12:27:12.805778] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:42.053 [2024-12-05 12:27:12.805786] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:42.053 [2024-12-05 12:27:12.805794] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:42.053 [2024-12-05 12:27:12.805800] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:42.053 [2024-12-05 12:27:12.805806] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:42.053 [2024-12-05 12:27:12.805815] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:42.053 [2024-12-05 12:27:12.805821] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:42.053 [2024-12-05 12:27:12.805827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.053 [2024-12-05 12:27:12.805834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:42.053 [2024-12-05 12:27:12.805840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:23:42.053 [2024-12-05 12:27:12.805846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.053 [2024-12-05 12:27:12.805910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.053 [2024-12-05 12:27:12.805917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:42.053 [2024-12-05 12:27:12.805923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:42.053 [2024-12-05 12:27:12.805928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.053 [2024-12-05 12:27:12.806013] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:42.053 [2024-12-05 12:27:12.806028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:42.053 [2024-12-05 12:27:12.806035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:42.053 [2024-12-05 12:27:12.806041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.053 [2024-12-05 12:27:12.806047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:42.053 [2024-12-05 12:27:12.806053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:42.053 [2024-12-05 12:27:12.806059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:42.053 [2024-12-05 12:27:12.806064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:23:42.053 [2024-12-05 12:27:12.806069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:42.053 [2024-12-05 12:27:12.806075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:42.053 [2024-12-05 12:27:12.806080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:42.053 [2024-12-05 12:27:12.806086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:42.053 [2024-12-05 12:27:12.806092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:42.053 [2024-12-05 12:27:12.806104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:42.053 [2024-12-05 12:27:12.806110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:42.053 [2024-12-05 12:27:12.806116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.053 [2024-12-05 12:27:12.806121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:42.053 [2024-12-05 12:27:12.806126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:42.053 [2024-12-05 12:27:12.806132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.053 [2024-12-05 12:27:12.806137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:42.053 [2024-12-05 12:27:12.806143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:42.053 [2024-12-05 12:27:12.806148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.053 [2024-12-05 12:27:12.806153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:42.053 [2024-12-05 12:27:12.806158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:42.053 [2024-12-05 12:27:12.806163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.053 [2024-12-05 12:27:12.806168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:42.053 [2024-12-05 12:27:12.806173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:42.053 [2024-12-05 12:27:12.806179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.053 [2024-12-05 12:27:12.806184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:42.053 [2024-12-05 12:27:12.806190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:42.053 [2024-12-05 12:27:12.806195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:42.053 [2024-12-05 12:27:12.806201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:42.053 [2024-12-05 12:27:12.806206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:42.053 [2024-12-05 12:27:12.806211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:42.053 [2024-12-05 12:27:12.806216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:42.053 [2024-12-05 12:27:12.806221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:42.053 [2024-12-05 12:27:12.806226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:42.053 [2024-12-05 12:27:12.806232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:42.053 [2024-12-05 12:27:12.806238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:42.053 [2024-12-05 12:27:12.806243] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.053 [2024-12-05 12:27:12.806247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:42.053 [2024-12-05 12:27:12.806253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:42.053 [2024-12-05 12:27:12.806258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.053 [2024-12-05 12:27:12.806263] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:42.053 [2024-12-05 12:27:12.806269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:42.053 [2024-12-05 12:27:12.806276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:42.053 [2024-12-05 12:27:12.806283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:42.053 [2024-12-05 12:27:12.806288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:42.053 [2024-12-05 12:27:12.806293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:42.053 [2024-12-05 12:27:12.806298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:42.053 [2024-12-05 12:27:12.806304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:42.053 [2024-12-05 12:27:12.806309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:42.053 [2024-12-05 12:27:12.806314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:42.053 [2024-12-05 12:27:12.806320] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:42.053 [2024-12-05 12:27:12.806327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:42.053 [2024-12-05 12:27:12.806337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:42.053 [2024-12-05 12:27:12.806342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:42.053 [2024-12-05 12:27:12.806347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:42.054 [2024-12-05 12:27:12.806353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:42.054 [2024-12-05 12:27:12.806358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:42.054 [2024-12-05 12:27:12.806363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:42.054 [2024-12-05 12:27:12.806368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:42.054 [2024-12-05 12:27:12.806373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:42.054 [2024-12-05 12:27:12.806379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:42.054 [2024-12-05 12:27:12.806384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:23:42.054 [2024-12-05 12:27:12.806389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:42.054 [2024-12-05 12:27:12.806394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:42.054 [2024-12-05 12:27:12.806399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:42.054 [2024-12-05 12:27:12.806405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:42.054 [2024-12-05 12:27:12.806410] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:42.054 [2024-12-05 12:27:12.806417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:42.054 [2024-12-05 12:27:12.806423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:42.054 [2024-12-05 12:27:12.806428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:42.054 [2024-12-05 12:27:12.806433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:42.054 [2024-12-05 12:27:12.806439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:42.054 [2024-12-05 12:27:12.806444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.054 [2024-12-05 12:27:12.806450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:42.054 [2024-12-05 12:27:12.806458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:23:42.054 [2024-12-05 12:27:12.806478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.054 [2024-12-05 12:27:12.830929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.054 [2024-12-05 12:27:12.830960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:42.054 [2024-12-05 12:27:12.830969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.406 ms 00:23:42.054 [2024-12-05 12:27:12.830979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.054 [2024-12-05 12:27:12.831041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.054 [2024-12-05 12:27:12.831047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:42.054 [2024-12-05 12:27:12.831054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:23:42.054 [2024-12-05 12:27:12.831060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.054 [2024-12-05 12:27:12.870022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.054 [2024-12-05 12:27:12.870057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:42.054 [2024-12-05 12:27:12.870067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.922 ms 00:23:42.054 [2024-12-05 12:27:12.870074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:42.054 [2024-12-05 12:27:12.870106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.054 [2024-12-05 12:27:12.870114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:42.054 [2024-12-05 12:27:12.870124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:42.054 [2024-12-05 12:27:12.870130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.054 [2024-12-05 12:27:12.870581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.054 [2024-12-05 12:27:12.870607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:42.054 [2024-12-05 12:27:12.870616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:23:42.054 [2024-12-05 12:27:12.870621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.054 [2024-12-05 12:27:12.870733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.054 [2024-12-05 12:27:12.870740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:42.054 [2024-12-05 12:27:12.870748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:23:42.054 [2024-12-05 12:27:12.870759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.054 [2024-12-05 12:27:12.882671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.054 [2024-12-05 12:27:12.882698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:42.054 [2024-12-05 12:27:12.882708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.896 ms 00:23:42.054 [2024-12-05 12:27:12.882714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.054 [2024-12-05 12:27:12.893001] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:42.054 [2024-12-05 12:27:12.893032] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:42.054 [2024-12-05 12:27:12.893042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.054 [2024-12-05 12:27:12.893049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:42.054 [2024-12-05 12:27:12.893064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.240 ms 00:23:42.054 [2024-12-05 12:27:12.893070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.054 [2024-12-05 12:27:12.911515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.054 [2024-12-05 12:27:12.911556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:42.054 [2024-12-05 12:27:12.911565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.415 ms 00:23:42.054 [2024-12-05 12:27:12.911571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.313 [2024-12-05 12:27:12.920581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.313 [2024-12-05 12:27:12.920608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:42.313 [2024-12-05 12:27:12.920615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.973 ms 00:23:42.313 [2024-12-05 12:27:12.920621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.313 [2024-12-05 12:27:12.929320] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.314 [2024-12-05 12:27:12.929346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:42.314 [2024-12-05 12:27:12.929353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.672 ms 00:23:42.314 [2024-12-05 12:27:12.929359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.314 [2024-12-05 12:27:12.929834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.314 [2024-12-05 12:27:12.929853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:42.314 [2024-12-05 12:27:12.929862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:23:42.314 [2024-12-05 12:27:12.929869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.314 [2024-12-05 12:27:12.976760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.314 [2024-12-05 12:27:12.976804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:42.314 [2024-12-05 12:27:12.976819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.875 ms 00:23:42.314 [2024-12-05 12:27:12.976826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.314 [2024-12-05 12:27:12.985205] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:42.314 [2024-12-05 12:27:12.987431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.314 [2024-12-05 12:27:12.987456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:42.314 [2024-12-05 12:27:12.987475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.566 ms 00:23:42.314 [2024-12-05 12:27:12.987482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.314 [2024-12-05 12:27:12.987546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.314 [2024-12-05 12:27:12.987554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:42.314 [2024-12-05 12:27:12.987565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:42.314 [2024-12-05 12:27:12.987570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.314 [2024-12-05 12:27:12.987642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.314 [2024-12-05 12:27:12.987674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:42.314 [2024-12-05 12:27:12.987681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:42.314 [2024-12-05 12:27:12.987688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.314 [2024-12-05 12:27:12.987705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.314 [2024-12-05 12:27:12.987712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:42.314 [2024-12-05 12:27:12.987718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:42.314 [2024-12-05 12:27:12.987724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.314 [2024-12-05 12:27:12.987755] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:42.314 [2024-12-05 12:27:12.987763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.314 [2024-12-05 12:27:12.987769] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:42.314 [2024-12-05 12:27:12.987776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:42.314 [2024-12-05 12:27:12.987782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.314 [2024-12-05 12:27:13.005731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.314 [2024-12-05 12:27:13.005770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:42.314 [2024-12-05 12:27:13.005783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.935 ms 00:23:42.314 [2024-12-05 12:27:13.005790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.314 [2024-12-05 12:27:13.005848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.314 [2024-12-05 12:27:13.005856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:42.314 [2024-12-05 12:27:13.005863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:42.314 [2024-12-05 12:27:13.005868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.314 [2024-12-05 12:27:13.007084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.316 ms, result 0 00:23:43.693  [2024-12-05T12:28:25.175Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-05 12:28:25.031739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.306 [2024-12-05 12:28:25.031817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:54.306 [2024-12-05 12:28:25.031838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:54.306 [2024-12-05 12:28:25.031851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.306 [2024-12-05 12:28:25.031884] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:54.306 [2024-12-05 12:28:25.036602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.306 [2024-12-05 12:28:25.036658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:54.306 [2024-12-05 12:28:25.036674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.694 ms 00:24:54.306 [2024-12-05 12:28:25.036686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.306 [2024-12-05 12:28:25.037034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.306 [2024-12-05 12:28:25.037062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:54.306 [2024-12-05 12:28:25.037075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms
00:24:54.306 [2024-12-05 12:28:25.037088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.306 [2024-12-05 12:28:25.041889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.306 [2024-12-05 12:28:25.041909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:54.306 [2024-12-05 12:28:25.041917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.782 ms 00:24:54.306 [2024-12-05 12:28:25.041927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.306 [2024-12-05 12:28:25.046728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.306 [2024-12-05 12:28:25.046755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:54.306 [2024-12-05 12:28:25.046764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.787 ms 00:24:54.306 [2024-12-05 12:28:25.046770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.306 [2024-12-05 12:28:25.066925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.306 [2024-12-05 12:28:25.066954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:54.306 [2024-12-05 12:28:25.066963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.108 ms 00:24:54.306 [2024-12-05 12:28:25.066969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.306 [2024-12-05 12:28:25.078858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.306 [2024-12-05 12:28:25.078900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:54.306 [2024-12-05 12:28:25.078909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.860 ms 00:24:54.306 [2024-12-05 12:28:25.078916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.306 [2024-12-05 12:28:25.079010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.306 [2024-12-05 12:28:25.079018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:54.306 [2024-12-05 12:28:25.079025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:24:54.306 [2024-12-05 12:28:25.079031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.306 [2024-12-05 12:28:25.096933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.306 [2024-12-05 12:28:25.096958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:54.306 [2024-12-05 12:28:25.096966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.891 ms 00:24:54.306 [2024-12-05 12:28:25.096971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.306 [2024-12-05 12:28:25.114376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.306 [2024-12-05 12:28:25.114403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:54.306 [2024-12-05 12:28:25.114410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.375 ms 00:24:54.306 [2024-12-05 12:28:25.114416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.306 [2024-12-05 12:28:25.132115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.306 [2024-12-05 12:28:25.132140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:54.306 [2024-12-05 12:28:25.132147] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.674 ms 00:24:54.306 [2024-12-05 12:28:25.132153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.306 [2024-12-05 12:28:25.149424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.306 [2024-12-05 12:28:25.149449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:54.306 [2024-12-05 12:28:25.149457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.227 ms 00:24:54.307 [2024-12-05 12:28:25.149473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.307 [2024-12-05 12:28:25.149498] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:54.307 [2024-12-05 12:28:25.149514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149775] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:54.307 [2024-12-05 12:28:25.149908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.149914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.149919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 
12:28:25.149925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.149931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.149937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.149943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.149948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.149954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.149960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.149967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.149973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.149979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.149985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.149991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.149997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:24:54.308 [2024-12-05 12:28:25.150073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:54.308 [2024-12-05 12:28:25.150112] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:54.308 [2024-12-05 12:28:25.150118] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 82e30c13-829a-4c2c-aff3-a48d611571c4 00:24:54.308 [2024-12-05 12:28:25.150125] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:54.308 [2024-12-05 12:28:25.150130] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:54.308 [2024-12-05 12:28:25.150135] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:54.308 [2024-12-05 12:28:25.150142] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:54.308 [2024-12-05 12:28:25.150152] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:54.308 [2024-12-05 12:28:25.150158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:54.308 [2024-12-05 12:28:25.150165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:54.308 [2024-12-05 12:28:25.150169] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:54.308 [2024-12-05 12:28:25.150174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:54.308 [2024-12-05 12:28:25.150181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.308 [2024-12-05 12:28:25.150187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:54.308 [2024-12-05 12:28:25.150193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:24:54.308 [2024-12-05 12:28:25.150201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.308 [2024-12-05 12:28:25.160358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.308 [2024-12-05 12:28:25.160383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:54.308 [2024-12-05 12:28:25.160391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.142 ms 00:24:54.308 [2024-12-05 12:28:25.160398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.308 [2024-12-05 12:28:25.160696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.308 [2024-12-05 12:28:25.160710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:54.308 [2024-12-05 12:28:25.160719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:24:54.308 [2024-12-05 12:28:25.160726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.566 [2024-12-05 12:28:25.188227] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:24:54.566 [2024-12-05 12:28:25.188256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:54.566 [2024-12-05 12:28:25.188265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.566 [2024-12-05 12:28:25.188272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.566 [2024-12-05 12:28:25.188315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.566 [2024-12-05 12:28:25.188323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:54.566 [2024-12-05 12:28:25.188333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.566 [2024-12-05 12:28:25.188339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.566 [2024-12-05 12:28:25.188387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.566 [2024-12-05 12:28:25.188395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:54.566 [2024-12-05 12:28:25.188403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.566 [2024-12-05 12:28:25.188409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.566 [2024-12-05 12:28:25.188421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.566 [2024-12-05 12:28:25.188428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:54.566 [2024-12-05 12:28:25.188435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.566 [2024-12-05 12:28:25.188443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.566 [2024-12-05 12:28:25.250819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.566 [2024-12-05 12:28:25.250855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:54.566 [2024-12-05 12:28:25.250865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.566 [2024-12-05 12:28:25.250872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.566 [2024-12-05 12:28:25.302780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.566 [2024-12-05 12:28:25.302816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:54.566 [2024-12-05 12:28:25.302830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.566 [2024-12-05 12:28:25.302837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.566 [2024-12-05 12:28:25.302883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.566 [2024-12-05 12:28:25.302890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:54.566 [2024-12-05 12:28:25.302897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.566 [2024-12-05 12:28:25.302904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.566 [2024-12-05 12:28:25.302953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.566 [2024-12-05 12:28:25.302961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:54.566 [2024-12-05 12:28:25.302968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.566 [2024-12-05 12:28:25.302974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:54.566 [2024-12-05 12:28:25.303052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.566 [2024-12-05 12:28:25.303061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:54.566 [2024-12-05 12:28:25.303068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.566 [2024-12-05 12:28:25.303073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.566 [2024-12-05 12:28:25.303099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.566 [2024-12-05 12:28:25.303106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:54.566 [2024-12-05 12:28:25.303113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.566 [2024-12-05 12:28:25.303119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.566 [2024-12-05 12:28:25.303156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.566 [2024-12-05 12:28:25.303164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:54.566 [2024-12-05 12:28:25.303170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.566 [2024-12-05 12:28:25.303176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.566 [2024-12-05 12:28:25.303213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.566 [2024-12-05 12:28:25.303221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:54.566 [2024-12-05 12:28:25.303227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.566 [2024-12-05 12:28:25.303234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.566 [2024-12-05 12:28:25.303340] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 271.589 ms, result 0 00:24:55.132 00:24:55.132 00:24:55.132 12:28:25 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:57.675 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:57.675 12:28:28 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:57.675 [2024-12-05 12:28:28.194559] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:24:57.675 [2024-12-05 12:28:28.194707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79338 ] 00:24:57.675 [2024-12-05 12:28:28.359397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.675 [2024-12-05 12:28:28.473267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.943 [2024-12-05 12:28:28.804382] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:57.944 [2024-12-05 12:28:28.804494] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:58.227 [2024-12-05 12:28:28.969176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.227 [2024-12-05 12:28:28.969251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:58.227 [2024-12-05 12:28:28.969268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:58.228 [2024-12-05 12:28:28.969277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.228 [2024-12-05 12:28:28.969335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.228 [2024-12-05 12:28:28.969350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:58.228 [2024-12-05 12:28:28.969360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:58.228 [2024-12-05 12:28:28.969369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.228 [2024-12-05 12:28:28.969391] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:58.228 [2024-12-05 12:28:28.970808] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:58.228 [2024-12-05 12:28:28.970865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.228 [2024-12-05 12:28:28.970875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:58.228 [2024-12-05 12:28:28.970887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:24:58.228 [2024-12-05 12:28:28.970895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.228 [2024-12-05 12:28:28.973211] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:58.228 [2024-12-05 12:28:28.988557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.228 [2024-12-05 12:28:28.988612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:58.228 [2024-12-05 12:28:28.988627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.348 ms 00:24:58.228 [2024-12-05 12:28:28.988636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.228 [2024-12-05 12:28:28.988722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.228 [2024-12-05 12:28:28.988734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:58.228 [2024-12-05 12:28:28.988743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:58.228 [2024-12-05 12:28:28.988752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.228 [2024-12-05 12:28:29.000106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:58.228 [2024-12-05 12:28:29.000151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:58.228 [2024-12-05 12:28:29.000163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.277 ms 00:24:58.228 [2024-12-05 12:28:29.000177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.228 [2024-12-05 12:28:29.000265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.228 [2024-12-05 12:28:29.000275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:58.228 [2024-12-05 12:28:29.000286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:58.228 [2024-12-05 12:28:29.000295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.228 [2024-12-05 12:28:29.000355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.228 [2024-12-05 12:28:29.000367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:58.228 [2024-12-05 12:28:29.000376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:58.228 [2024-12-05 12:28:29.000384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.228 [2024-12-05 12:28:29.000413] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:58.228 [2024-12-05 12:28:29.005061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.228 [2024-12-05 12:28:29.005105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:58.228 [2024-12-05 12:28:29.005121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.655 ms 00:24:58.228 [2024-12-05 12:28:29.005130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.228 [2024-12-05 12:28:29.005170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.228 [2024-12-05 12:28:29.005180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:58.228 [2024-12-05 12:28:29.005190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:58.228 [2024-12-05 12:28:29.005198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.228 [2024-12-05 12:28:29.005235] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:58.228 [2024-12-05 12:28:29.005262] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:58.228 [2024-12-05 12:28:29.005304] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:58.228 [2024-12-05 12:28:29.005325] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:58.228 [2024-12-05 12:28:29.005439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:58.228 [2024-12-05 12:28:29.005481] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:58.228 [2024-12-05 12:28:29.005495] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:58.228 [2024-12-05 12:28:29.005507] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:58.228 [2024-12-05 12:28:29.005517] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:58.228 [2024-12-05 12:28:29.005526] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:58.228 [2024-12-05 12:28:29.005535] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:58.228 [2024-12-05 12:28:29.005547] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:58.228 [2024-12-05 12:28:29.005559] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:58.228 [2024-12-05 12:28:29.005569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.228 [2024-12-05 12:28:29.005578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:58.228 [2024-12-05 12:28:29.005586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:24:58.228 [2024-12-05 12:28:29.005593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.228 [2024-12-05 12:28:29.005677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.228 [2024-12-05 12:28:29.005695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:58.228 [2024-12-05 12:28:29.005704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:58.228 [2024-12-05 12:28:29.005715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.228 [2024-12-05 12:28:29.005829] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:58.228 [2024-12-05 12:28:29.005850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:58.228 [2024-12-05 12:28:29.005863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:58.228 [2024-12-05 12:28:29.005873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.228 [2024-12-05 12:28:29.005881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:58.228 [2024-12-05 12:28:29.005889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:58.228 [2024-12-05 12:28:29.005897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:58.228 [2024-12-05 12:28:29.005904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:58.228 [2024-12-05 12:28:29.005914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:58.228 [2024-12-05 12:28:29.005922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:58.228 [2024-12-05 12:28:29.005929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:58.228 [2024-12-05 12:28:29.005936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:58.228 [2024-12-05 12:28:29.005942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:58.228 [2024-12-05 12:28:29.005957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:58.228 [2024-12-05 12:28:29.005966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:58.228 [2024-12-05 12:28:29.005975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.228 [2024-12-05 12:28:29.005984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:58.228 [2024-12-05 12:28:29.005991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:58.228 [2024-12-05 12:28:29.005998] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.228 [2024-12-05 12:28:29.006006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:58.228 [2024-12-05 12:28:29.006018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:58.228 [2024-12-05 12:28:29.006026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.228 [2024-12-05 12:28:29.006033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:58.228 [2024-12-05 12:28:29.006040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:58.228 [2024-12-05 12:28:29.006047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.228 [2024-12-05 12:28:29.006054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:58.228 [2024-12-05 12:28:29.006061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:58.228 [2024-12-05 12:28:29.006067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.228 [2024-12-05 12:28:29.006074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:58.228 [2024-12-05 12:28:29.006083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:58.228 [2024-12-05 12:28:29.006089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.228 [2024-12-05 12:28:29.006096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:58.228 [2024-12-05 12:28:29.006102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:58.228 [2024-12-05 12:28:29.006109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:58.228 [2024-12-05 12:28:29.006116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:58.228 [2024-12-05 12:28:29.006123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:58.228 [2024-12-05 12:28:29.006129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:58.228 [2024-12-05 12:28:29.006136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:58.228 [2024-12-05 12:28:29.006142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:58.228 [2024-12-05 12:28:29.006149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.228 [2024-12-05 12:28:29.006158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:58.228 [2024-12-05 12:28:29.006166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:58.228 [2024-12-05 12:28:29.006173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.228 [2024-12-05 12:28:29.006180] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:58.229 [2024-12-05 12:28:29.006189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:58.229 [2024-12-05 12:28:29.006197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:58.229 [2024-12-05 12:28:29.006204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.229 [2024-12-05 12:28:29.006215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:58.229 [2024-12-05 12:28:29.006226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:58.229 [2024-12-05 12:28:29.006234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:58.229 
[2024-12-05 12:28:29.006241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:58.229 [2024-12-05 12:28:29.006248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:58.229 [2024-12-05 12:28:29.006257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:58.229 [2024-12-05 12:28:29.006267] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:58.229 [2024-12-05 12:28:29.006277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:58.229 [2024-12-05 12:28:29.006288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:58.229 [2024-12-05 12:28:29.006295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:58.229 [2024-12-05 12:28:29.006303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:58.229 [2024-12-05 12:28:29.006310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:58.229 [2024-12-05 12:28:29.006317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:58.229 [2024-12-05 12:28:29.006326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:58.229 [2024-12-05 12:28:29.006333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:58.229 [2024-12-05 12:28:29.006340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:58.229 [2024-12-05 12:28:29.006347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:58.229 [2024-12-05 12:28:29.006353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:58.229 [2024-12-05 12:28:29.006362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:58.229 [2024-12-05 12:28:29.006371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:58.229 [2024-12-05 12:28:29.006378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:58.229 [2024-12-05 12:28:29.006385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:58.229 [2024-12-05 12:28:29.006393] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:58.229 [2024-12-05 12:28:29.006401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:58.229 [2024-12-05 12:28:29.006410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:58.229 [2024-12-05 12:28:29.006418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:58.229 [2024-12-05 12:28:29.006425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:58.229 [2024-12-05 12:28:29.006433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:58.229 [2024-12-05 12:28:29.006440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.229 [2024-12-05 12:28:29.006448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:58.229 [2024-12-05 12:28:29.006457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:24:58.229 [2024-12-05 12:28:29.006480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.229 [2024-12-05 12:28:29.044594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.229 [2024-12-05 12:28:29.044632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:58.229 [2024-12-05 12:28:29.044646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.065 ms 00:24:58.229 [2024-12-05 12:28:29.044660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.229 [2024-12-05 12:28:29.044759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.229 [2024-12-05 12:28:29.044769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:58.229 [2024-12-05 12:28:29.044779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:58.229 [2024-12-05 12:28:29.044787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.494 [2024-12-05 12:28:29.098189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.494 [2024-12-05 12:28:29.098244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:58.494 [2024-12-05 12:28:29.098259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.337 ms 00:24:58.494 [2024-12-05 12:28:29.098269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.494 [2024-12-05 12:28:29.098325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.494 [2024-12-05 12:28:29.098337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:58.494 [2024-12-05 12:28:29.098352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:58.494 [2024-12-05 12:28:29.098360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.494 [2024-12-05 12:28:29.099122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.494 [2024-12-05 12:28:29.099170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:58.494 [2024-12-05 12:28:29.099183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:24:58.494 [2024-12-05 12:28:29.099191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.494 [2024-12-05 12:28:29.099366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.494 [2024-12-05 12:28:29.099378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:58.495 [2024-12-05 12:28:29.099394] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:24:58.495 [2024-12-05 12:28:29.099403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.495 [2024-12-05 12:28:29.117478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.495 [2024-12-05 12:28:29.117525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:58.495 [2024-12-05 12:28:29.117536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.036 ms 00:24:58.495 [2024-12-05 12:28:29.117545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.495 [2024-12-05 12:28:29.132935] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:58.495 [2024-12-05 12:28:29.133032] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:58.495 [2024-12-05 12:28:29.133048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.495 [2024-12-05 12:28:29.133059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:58.495 [2024-12-05 12:28:29.133070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.389 ms 00:24:58.495 [2024-12-05 12:28:29.133078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.495 [2024-12-05 12:28:29.159191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.495 [2024-12-05 12:28:29.159241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:58.495 [2024-12-05 12:28:29.159256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.056 ms 00:24:58.495 [2024-12-05 12:28:29.159265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.495 [2024-12-05 12:28:29.172281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.495 [2024-12-05 12:28:29.172328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:58.495 [2024-12-05 12:28:29.172340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.950 ms 00:24:58.495 [2024-12-05 12:28:29.172348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.495 [2024-12-05 12:28:29.184605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.495 [2024-12-05 12:28:29.184653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:58.495 [2024-12-05 12:28:29.184665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.210 ms 00:24:58.495 [2024-12-05 12:28:29.184674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.495 [2024-12-05 12:28:29.185449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.495 [2024-12-05 12:28:29.185513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:58.495 [2024-12-05 12:28:29.185530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:24:58.495 [2024-12-05 12:28:29.185539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.495 [2024-12-05 12:28:29.256839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.495 [2024-12-05 12:28:29.256907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:58.495 [2024-12-05 12:28:29.256930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 71.276 ms 00:24:58.495 [2024-12-05 12:28:29.256940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.495 [2024-12-05 12:28:29.268761] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:58.495 [2024-12-05 12:28:29.272413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.495 [2024-12-05 12:28:29.272457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:58.495 [2024-12-05 12:28:29.272489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.407 ms 00:24:58.495 [2024-12-05 12:28:29.272498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.495 [2024-12-05 12:28:29.272593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.495 [2024-12-05 12:28:29.272605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:58.495 [2024-12-05 12:28:29.272619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:58.495 [2024-12-05 12:28:29.272629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.495 [2024-12-05 12:28:29.272718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.495 [2024-12-05 12:28:29.272730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:58.495 [2024-12-05 12:28:29.272739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:58.495 [2024-12-05 12:28:29.272750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.495 [2024-12-05 12:28:29.272774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.495 [2024-12-05 12:28:29.272785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:58.495 [2024-12-05 12:28:29.272794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:58.495 [2024-12-05 12:28:29.272802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.495 [2024-12-05 12:28:29.272848] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:58.495 [2024-12-05 12:28:29.272861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.495 [2024-12-05 12:28:29.272869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:58.495 [2024-12-05 12:28:29.272878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:58.495 [2024-12-05 12:28:29.272889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.495 [2024-12-05 12:28:29.298820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.495 [2024-12-05 12:28:29.298871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:58.495 [2024-12-05 12:28:29.298891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.913 ms 00:24:58.495 [2024-12-05 12:28:29.298902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.495 [2024-12-05 12:28:29.298996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.495 [2024-12-05 12:28:29.299008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:58.495 [2024-12-05 12:28:29.299018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:58.495 [2024-12-05 12:28:29.299027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:58.495 [2024-12-05 12:28:29.302352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.829 ms, result 0 00:24:59.873 [... intermediate timestamped "Copying: N/1024 [MB]" progress redraws elided; the copy advanced from 10/1024 MB at 12:28:31Z to completion at 12:29:37Z ...] [2024-12-05T12:29:37.727Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-05 12:29:37.406938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.858 [2024-12-05 12:29:37.407040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:06.858 [2024-12-05 12:29:37.407076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:06.858 [2024-12-05 12:29:37.407087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.858 [2024-12-05 12:29:37.408017] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:06.858 [2024-12-05 12:29:37.411886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.858 [2024-12-05 12:29:37.411939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:06.858 [2024-12-05 12:29:37.411952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.834 ms 00:26:06.858 [2024-12-05 12:29:37.411961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.858 [2024-12-05 12:29:37.424880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.858 [2024-12-05 12:29:37.424955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:06.858 [2024-12-05 12:29:37.424970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.644 ms 00:26:06.858 [2024-12-05 12:29:37.424989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.858 [2024-12-05 12:29:37.449589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.858 [2024-12-05 12:29:37.449643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:06.858 [2024-12-05 12:29:37.449657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.580 ms 00:26:06.858 [2024-12-05 12:29:37.449666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.858 [2024-12-05 12:29:37.455868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.858 [2024-12-05 12:29:37.455913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:06.858 [2024-12-05 12:29:37.455925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.163 ms 00:26:06.858 [2024-12-05 12:29:37.455943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.858 [2024-12-05 12:29:37.484163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.858 [2024-12-05 12:29:37.484220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:06.858 [2024-12-05 12:29:37.484234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.170 ms 00:26:06.858 [2024-12-05
12:29:37.484243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.858 [2024-12-05 12:29:37.501607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.858 [2024-12-05 12:29:37.501659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:06.858 [2024-12-05 12:29:37.501673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.310 ms 00:26:06.858 [2024-12-05 12:29:37.501682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.118 [2024-12-05 12:29:37.767301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.118 [2024-12-05 12:29:37.767373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:07.118 [2024-12-05 12:29:37.767391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 265.562 ms 00:26:07.118 [2024-12-05 12:29:37.767402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.118 [2024-12-05 12:29:37.794056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.118 [2024-12-05 12:29:37.794111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:07.118 [2024-12-05 12:29:37.794125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.634 ms 00:26:07.118 [2024-12-05 12:29:37.794134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.118 [2024-12-05 12:29:37.820005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.118 [2024-12-05 12:29:37.820055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:07.118 [2024-12-05 12:29:37.820068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.823 ms 00:26:07.118 [2024-12-05 12:29:37.820077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.118 [2024-12-05 12:29:37.845011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.119 [2024-12-05 12:29:37.845062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:07.119 [2024-12-05 12:29:37.845074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.885 ms 00:26:07.119 [2024-12-05 12:29:37.845083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.119 [2024-12-05 12:29:37.870378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.119 [2024-12-05 12:29:37.870425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:07.119 [2024-12-05 12:29:37.870438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.219 ms 00:26:07.119 [2024-12-05 12:29:37.870446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.119 [2024-12-05 12:29:37.870508] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:07.119 [2024-12-05 12:29:37.870527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 96512 / 261120 wr_cnt: 1 state: open [... Bands 2 through 100 elided: each reports 0 / 261120 wr_cnt: 0 state: free ...] 00:26:07.120 [2024-12-05 12:29:37.871344] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:07.120 [2024-12-05 12:29:37.871354] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 82e30c13-829a-4c2c-aff3-a48d611571c4 00:26:07.120 [2024-12-05 12:29:37.871363] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 96512 00:26:07.120 [2024-12-05 12:29:37.871372] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 97472 00:26:07.120 [2024-12-05
12:29:37.871380] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 96512 00:26:07.120 [2024-12-05 12:29:37.871390] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0099 00:26:07.120 [2024-12-05 12:29:37.871414] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:07.120 [2024-12-05 12:29:37.871422] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:07.120 [2024-12-05 12:29:37.871446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:07.120 [2024-12-05 12:29:37.871453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:07.120 [2024-12-05 12:29:37.871472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:07.120 [2024-12-05 12:29:37.871481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.120 [2024-12-05 12:29:37.871489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:07.120 [2024-12-05 12:29:37.871499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:26:07.120 [2024-12-05 12:29:37.871508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.120 [2024-12-05 12:29:37.886065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.120 [2024-12-05 12:29:37.886111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:07.120 [2024-12-05 12:29:37.886131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.521 ms 00:26:07.120 [2024-12-05 12:29:37.886140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.120 [2024-12-05 12:29:37.886591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.120 [2024-12-05 12:29:37.886620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:07.120 [2024-12-05 12:29:37.886631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:26:07.120 [2024-12-05 12:29:37.886641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.120 [2024-12-05 12:29:37.926230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.120 [2024-12-05 12:29:37.926283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:07.120 [2024-12-05 12:29:37.926297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.120 [2024-12-05 12:29:37.926307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.120 [2024-12-05 12:29:37.926380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.120 [2024-12-05 12:29:37.926391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:07.120 [2024-12-05 12:29:37.926401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.120 [2024-12-05 12:29:37.926411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.120 [2024-12-05 12:29:37.926517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.120 [2024-12-05 12:29:37.926538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:07.120 [2024-12-05 12:29:37.926548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.120 [2024-12-05 12:29:37.926557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.120 [2024-12-05 12:29:37.926574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:26:07.120 [2024-12-05 12:29:37.926583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:07.120 [2024-12-05 12:29:37.926591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.120 [2024-12-05 12:29:37.926600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.380 [2024-12-05 12:29:38.004486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.380 [2024-12-05 12:29:38.004540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:07.380 [2024-12-05 12:29:38.004552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.380 [2024-12-05 12:29:38.004560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.380 [2024-12-05 12:29:38.059570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.380 [2024-12-05 12:29:38.059609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:07.380 [2024-12-05 12:29:38.059619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.380 [2024-12-05 12:29:38.059626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.380 [2024-12-05 12:29:38.059681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.380 [2024-12-05 12:29:38.059689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:07.380 [2024-12-05 12:29:38.059696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.380 [2024-12-05 12:29:38.059706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.380 [2024-12-05 12:29:38.059753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.380 [2024-12-05 12:29:38.059762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:07.380 [2024-12-05 12:29:38.059768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.380 [2024-12-05 12:29:38.059775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.380 [2024-12-05 12:29:38.060003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.380 [2024-12-05 12:29:38.060014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:07.380 [2024-12-05 12:29:38.060021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.381 [2024-12-05 12:29:38.060030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.381 [2024-12-05 12:29:38.060055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.381 [2024-12-05 12:29:38.060064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:07.381 [2024-12-05 12:29:38.060071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.381 [2024-12-05 12:29:38.060077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.381 [2024-12-05 12:29:38.060114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.381 [2024-12-05 12:29:38.060122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:07.381 [2024-12-05 12:29:38.060129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.381 [2024-12-05 12:29:38.060136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.381 
[2024-12-05 12:29:38.060180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.381 [2024-12-05 12:29:38.060189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:07.381 [2024-12-05 12:29:38.060196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.381 [2024-12-05 12:29:38.060202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.381 [2024-12-05 12:29:38.060317] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 656.549 ms, result 0 00:26:08.320 00:26:08.320 00:26:08.579 12:29:39 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:26:08.579 [2024-12-05 12:29:39.254508] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:26:08.579 [2024-12-05 12:29:39.254640] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80060 ] 00:26:08.579 [2024-12-05 12:29:39.413971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.837 [2024-12-05 12:29:39.505587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.096 [2024-12-05 12:29:39.739543] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:09.096 [2024-12-05 12:29:39.739599] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:09.096 [2024-12-05 12:29:39.895455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.096 [2024-12-05 12:29:39.895507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:09.096 [2024-12-05 12:29:39.895520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:09.096 [2024-12-05 12:29:39.895527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.096 [2024-12-05 12:29:39.895568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.096 [2024-12-05 12:29:39.895578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:09.096 [2024-12-05 12:29:39.895586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:09.096 [2024-12-05 12:29:39.895593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.096 [2024-12-05 12:29:39.895606] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:09.096 [2024-12-05 12:29:39.896160] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:09.096 [2024-12-05 12:29:39.896180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.096 [2024-12-05 12:29:39.896186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:09.096 [2024-12-05 12:29:39.896193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:26:09.096 [2024-12-05 12:29:39.896199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.096 [2024-12-05 12:29:39.897500] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 
0, shm_clean 0 00:26:09.096 [2024-12-05 12:29:39.907772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.096 [2024-12-05 12:29:39.907801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:09.096 [2024-12-05 12:29:39.907812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.272 ms 00:26:09.096 [2024-12-05 12:29:39.907818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.096 [2024-12-05 12:29:39.907865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.096 [2024-12-05 12:29:39.907872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:09.096 [2024-12-05 12:29:39.907879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:09.096 [2024-12-05 12:29:39.907885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.096 [2024-12-05 12:29:39.914235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.096 [2024-12-05 12:29:39.914261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:09.096 [2024-12-05 12:29:39.914269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.310 ms 00:26:09.096 [2024-12-05 12:29:39.914278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.096 [2024-12-05 12:29:39.914336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.096 [2024-12-05 12:29:39.914343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:09.096 [2024-12-05 12:29:39.914350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:09.096 [2024-12-05 12:29:39.914355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.096 [2024-12-05 12:29:39.914393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.096 [2024-12-05 12:29:39.914401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:09.096 [2024-12-05 12:29:39.914408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:09.096 [2024-12-05 12:29:39.914414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.096 [2024-12-05 12:29:39.914431] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:09.096 [2024-12-05 12:29:39.917431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.096 [2024-12-05 12:29:39.917456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:09.096 [2024-12-05 12:29:39.917476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:26:09.096 [2024-12-05 12:29:39.917483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.096 [2024-12-05 12:29:39.917511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.096 [2024-12-05 12:29:39.917518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:09.096 [2024-12-05 12:29:39.917525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:09.096 [2024-12-05 12:29:39.917531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.096 [2024-12-05 12:29:39.917545] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:09.096 [2024-12-05 12:29:39.917562] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:09.096 [2024-12-05 12:29:39.917591] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:09.096 [2024-12-05 12:29:39.917607] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:09.096 [2024-12-05 12:29:39.917690] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:09.096 [2024-12-05 12:29:39.917699] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:09.096 [2024-12-05 12:29:39.917708] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:09.096 [2024-12-05 12:29:39.917716] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:09.096 [2024-12-05 12:29:39.917724] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:09.096 [2024-12-05 12:29:39.917730] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:09.096 [2024-12-05 12:29:39.917737] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:09.096 [2024-12-05 12:29:39.917744] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:09.096 [2024-12-05 12:29:39.917751] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:09.096 [2024-12-05 12:29:39.917757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.096 [2024-12-05 12:29:39.917763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:09.096 [2024-12-05 12:29:39.917770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:26:09.096 [2024-12-05 12:29:39.917775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.096 [2024-12-05 12:29:39.917838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.096 [2024-12-05 12:29:39.917852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:09.096 [2024-12-05 12:29:39.917858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:09.096 [2024-12-05 12:29:39.917864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.096 [2024-12-05 12:29:39.917943] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:09.096 [2024-12-05 12:29:39.917951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:09.096 [2024-12-05 12:29:39.917958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:09.096 [2024-12-05 12:29:39.917963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.096 [2024-12-05 12:29:39.917970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:09.096 [2024-12-05 12:29:39.917977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:09.096 [2024-12-05 12:29:39.917983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:09.097 [2024-12-05 12:29:39.917989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:09.097 [2024-12-05 12:29:39.917994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:09.097 [2024-12-05 12:29:39.918002] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:09.097 [2024-12-05 12:29:39.918008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:09.097 [2024-12-05 12:29:39.918013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:09.097 [2024-12-05 12:29:39.918019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:09.097 [2024-12-05 12:29:39.918030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:09.097 [2024-12-05 12:29:39.918036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:09.097 [2024-12-05 12:29:39.918041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.097 [2024-12-05 12:29:39.918046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:09.097 [2024-12-05 12:29:39.918051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:09.097 [2024-12-05 12:29:39.918057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.097 [2024-12-05 12:29:39.918062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:09.097 [2024-12-05 12:29:39.918068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:09.097 [2024-12-05 12:29:39.918073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:09.097 [2024-12-05 12:29:39.918078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:09.097 [2024-12-05 12:29:39.918084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:09.097 [2024-12-05 12:29:39.918089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:09.097 [2024-12-05 12:29:39.918095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:09.097 [2024-12-05 12:29:39.918100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:09.097 [2024-12-05 12:29:39.918106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:09.097 [2024-12-05 12:29:39.918111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:09.097 [2024-12-05 12:29:39.918116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:09.097 [2024-12-05 12:29:39.918122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:09.097 [2024-12-05 12:29:39.918127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:09.097 [2024-12-05 12:29:39.918132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:09.097 [2024-12-05 12:29:39.918137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:09.097 [2024-12-05 12:29:39.918142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:09.097 [2024-12-05 12:29:39.918147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:09.097 [2024-12-05 12:29:39.918152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:09.097 [2024-12-05 12:29:39.918157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:09.097 [2024-12-05 12:29:39.918162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:09.097 [2024-12-05 12:29:39.918167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.097 [2024-12-05 12:29:39.918173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:09.097 [2024-12-05 
12:29:39.918179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:09.097 [2024-12-05 12:29:39.918184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.097 [2024-12-05 12:29:39.918189] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:09.097 [2024-12-05 12:29:39.918195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:09.097 [2024-12-05 12:29:39.918201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:09.097 [2024-12-05 12:29:39.918207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:09.097 [2024-12-05 12:29:39.918213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:09.097 [2024-12-05 12:29:39.918219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:09.097 [2024-12-05 12:29:39.918224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:09.097 [2024-12-05 12:29:39.918229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:09.097 [2024-12-05 12:29:39.918234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:09.097 [2024-12-05 12:29:39.918239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:09.097 [2024-12-05 12:29:39.918245] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:09.097 [2024-12-05 12:29:39.918253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:09.097 [2024-12-05 12:29:39.918261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:09.097 [2024-12-05 12:29:39.918266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:09.097 [2024-12-05 12:29:39.918272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:09.097 [2024-12-05 12:29:39.918277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:09.097 [2024-12-05 12:29:39.918284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:09.097 [2024-12-05 12:29:39.918289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:09.097 [2024-12-05 12:29:39.918295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:09.097 [2024-12-05 12:29:39.918301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:09.097 [2024-12-05 12:29:39.918306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:09.097 [2024-12-05 12:29:39.918311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:09.097 [2024-12-05 12:29:39.918317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 
blk_sz:0x20 00:26:09.097 [2024-12-05 12:29:39.918322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:09.097 [2024-12-05 12:29:39.918327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:09.097 [2024-12-05 12:29:39.918333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:09.097 [2024-12-05 12:29:39.918339] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:09.097 [2024-12-05 12:29:39.918345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:09.097 [2024-12-05 12:29:39.918351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:09.097 [2024-12-05 12:29:39.918357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:09.097 [2024-12-05 12:29:39.918364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:09.097 [2024-12-05 12:29:39.918370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:09.097 [2024-12-05 12:29:39.918376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.097 [2024-12-05 12:29:39.918382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:09.097 [2024-12-05 12:29:39.918388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:26:09.097 [2024-12-05 12:29:39.918394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.098 [2024-12-05 12:29:39.942864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.098 [2024-12-05 12:29:39.942893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:09.098 [2024-12-05 12:29:39.942901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.426 ms 00:26:09.098 [2024-12-05 12:29:39.942911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.098 [2024-12-05 12:29:39.942974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.098 [2024-12-05 12:29:39.942981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:09.098 [2024-12-05 12:29:39.942987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:26:09.098 [2024-12-05 12:29:39.942993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.356 [2024-12-05 12:29:39.980043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.356 [2024-12-05 12:29:39.980076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:09.356 [2024-12-05 12:29:39.980086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.010 ms 00:26:09.356 [2024-12-05 12:29:39.980092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.356 [2024-12-05 12:29:39.980125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.356 [2024-12-05 12:29:39.980134] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:09.356 [2024-12-05 12:29:39.980143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:09.356 [2024-12-05 12:29:39.980150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.356 [2024-12-05 12:29:39.980585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.356 [2024-12-05 12:29:39.980605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:09.356 [2024-12-05 12:29:39.980613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:26:09.356 [2024-12-05 12:29:39.980619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.356 [2024-12-05 12:29:39.980728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.356 [2024-12-05 12:29:39.980737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:09.356 [2024-12-05 12:29:39.980744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:26:09.356 [2024-12-05 12:29:39.980754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.356 [2024-12-05 12:29:39.992675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.356 [2024-12-05 12:29:39.992702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:09.356 [2024-12-05 12:29:39.992712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.905 ms 00:26:09.356 [2024-12-05 12:29:39.992719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.356 [2024-12-05 12:29:40.003400] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:09.356 [2024-12-05 12:29:40.003431] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:09.356 [2024-12-05 12:29:40.003441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.356 [2024-12-05 12:29:40.003447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:09.356 [2024-12-05 12:29:40.003455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.633 ms 00:26:09.356 [2024-12-05 12:29:40.003468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.356 [2024-12-05 12:29:40.022674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.356 [2024-12-05 12:29:40.022702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:09.356 [2024-12-05 12:29:40.022711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.173 ms 00:26:09.356 [2024-12-05 12:29:40.022718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.356 [2024-12-05 12:29:40.032033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.356 [2024-12-05 12:29:40.032059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:09.356 [2024-12-05 12:29:40.032067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.279 ms 00:26:09.357 [2024-12-05 12:29:40.032073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.357 [2024-12-05 12:29:40.041186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.357 [2024-12-05 12:29:40.041214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
00:26:09.357 [2024-12-05 12:29:40.041221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.085 ms 00:26:09.357 [2024-12-05 12:29:40.041227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.357 [2024-12-05 12:29:40.041715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.357 [2024-12-05 12:29:40.041734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:09.357 [2024-12-05 12:29:40.041744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:26:09.357 [2024-12-05 12:29:40.041751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.357 [2024-12-05 12:29:40.089908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.357 [2024-12-05 12:29:40.089945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:09.357 [2024-12-05 12:29:40.089960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.142 ms 00:26:09.357 [2024-12-05 12:29:40.089968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.357 [2024-12-05 12:29:40.097879] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:09.357 [2024-12-05 12:29:40.099948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.357 [2024-12-05 12:29:40.099974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:09.357 [2024-12-05 12:29:40.099984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.945 ms 00:26:09.357 [2024-12-05 12:29:40.099992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.357 [2024-12-05 12:29:40.100050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.357 [2024-12-05 12:29:40.100059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:09.357 [2024-12-05 12:29:40.100069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:09.357 [2024-12-05 12:29:40.100076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.357 [2024-12-05 12:29:40.101362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.357 [2024-12-05 12:29:40.101390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:09.357 [2024-12-05 12:29:40.101398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms 00:26:09.357 [2024-12-05 12:29:40.101405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.357 [2024-12-05 12:29:40.101427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.357 [2024-12-05 12:29:40.101434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:09.357 [2024-12-05 12:29:40.101441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:09.357 [2024-12-05 12:29:40.101448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.357 [2024-12-05 12:29:40.101498] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:09.357 [2024-12-05 12:29:40.101506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.357 [2024-12-05 12:29:40.101514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:09.357 [2024-12-05 12:29:40.101520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 
00:26:09.357 [2024-12-05 12:29:40.101526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.357 [2024-12-05 12:29:40.120469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.357 [2024-12-05 12:29:40.120498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:09.357 [2024-12-05 12:29:40.120510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.929 ms 00:26:09.357 [2024-12-05 12:29:40.120517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.357 [2024-12-05 12:29:40.120577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.357 [2024-12-05 12:29:40.120585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:09.357 [2024-12-05 12:29:40.120592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:09.357 [2024-12-05 12:29:40.120598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.357 [2024-12-05 12:29:40.121880] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 225.996 ms, result 0 00:26:10.740  [2024-12-05T12:29:42.548Z] Copying: 7808/1048576 [kB] (7808 kBps) [... intermediate copy-progress frames elided ...] [2024-12-05T12:30:55.798Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-12-05 12:30:55.603923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.929 [2024-12-05 12:30:55.604012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:24.929 [2024-12-05 12:30:55.604036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:24.929 [2024-12-05 12:30:55.604048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.929 [2024-12-05 12:30:55.604078] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:24.929 [2024-12-05 12:30:55.607692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.929 [2024-12-05 12:30:55.607735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:24.929 [2024-12-05 12:30:55.607748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.594 ms 00:27:24.929 [2024-12-05 12:30:55.607759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.929 [2024-12-05 12:30:55.608047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.929 [2024-12-05 12:30:55.608062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:24.929 [2024-12-05 12:30:55.608074] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:27:24.929 [2024-12-05 12:30:55.608089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.929 [2024-12-05 12:30:55.614783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.929 [2024-12-05 12:30:55.614819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:24.929 [2024-12-05 12:30:55.614830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.674 ms 00:27:24.929 [2024-12-05 12:30:55.614839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.929 [2024-12-05 12:30:55.621012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.929 [2024-12-05 12:30:55.621045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:24.929 [2024-12-05 12:30:55.621055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.136 ms 00:27:24.929 [2024-12-05 12:30:55.621068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.929 [2024-12-05 12:30:55.646041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.929 [2024-12-05 12:30:55.646078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:24.929 [2024-12-05 12:30:55.646089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.929 ms 00:27:24.929 [2024-12-05 12:30:55.646098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.929 [2024-12-05 12:30:55.661214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.929 [2024-12-05 12:30:55.661249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:24.929 [2024-12-05 12:30:55.661262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.080 ms 00:27:24.929 [2024-12-05 12:30:55.661270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.189 [2024-12-05 12:30:56.022243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.189 [2024-12-05 12:30:56.022316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:25.189 [2024-12-05 12:30:56.022333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 360.929 ms 00:27:25.189 [2024-12-05 12:30:56.022344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.189 [2024-12-05 12:30:56.048677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.189 [2024-12-05 12:30:56.048728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:25.189 [2024-12-05 12:30:56.048742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.315 ms 00:27:25.189 [2024-12-05 12:30:56.048751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.450 [2024-12-05 12:30:56.074693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.450 [2024-12-05 12:30:56.074744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:25.450 [2024-12-05 12:30:56.074758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.892 ms 00:27:25.450 [2024-12-05 12:30:56.074767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.450 [2024-12-05 12:30:56.099993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.450 [2024-12-05 12:30:56.100043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
superblock 00:27:25.450 [2024-12-05 12:30:56.100057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.176 ms 00:27:25.450 [2024-12-05 12:30:56.100065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.450 [2024-12-05 12:30:56.125033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.450 [2024-12-05 12:30:56.125080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:25.450 [2024-12-05 12:30:56.125093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.889 ms 00:27:25.450 [2024-12-05 12:30:56.125101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.450 [2024-12-05 12:30:56.125147] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:25.450 [2024-12-05 12:30:56.125166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:27:25.450 [2024-12-05 12:30:56.125179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 
12:30:56.125349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:25.450 [2024-12-05 12:30:56.125582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 
00:27:25.450 [2024-12-05 12:30:56.125590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 
wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.125991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.126000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.126009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.126018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.126028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.126037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.126046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.126054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.126064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:25.451 [2024-12-05 12:30:56.126081] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:25.451 [2024-12-05 12:30:56.126092] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 82e30c13-829a-4c2c-aff3-a48d611571c4 00:27:25.451 [2024-12-05 12:30:56.126104] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:27:25.451 [2024-12-05 12:30:56.126112] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 35520 00:27:25.451 [2024-12-05 12:30:56.126120] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 34560 00:27:25.451 [2024-12-05 12:30:56.126130] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0278 00:27:25.451 [2024-12-05 12:30:56.126142] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:25.451 [2024-12-05 12:30:56.126158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:25.451 [2024-12-05 12:30:56.126168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:25.451 [2024-12-05 12:30:56.126175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:25.451 [2024-12-05 12:30:56.126183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:25.451 [2024-12-05 12:30:56.126191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.451 [2024-12-05 12:30:56.126200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:25.451 [2024-12-05 12:30:56.126209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:27:25.451 [2024-12-05 12:30:56.126216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.451 [2024-12-05 12:30:56.140658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.451 [2024-12-05 12:30:56.140705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:25.451 [2024-12-05 12:30:56.140726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.422 ms 00:27:25.451 [2024-12-05 12:30:56.140735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.451 [2024-12-05 12:30:56.141194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.451 [2024-12-05 12:30:56.141260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:25.451 [2024-12-05 12:30:56.141271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:27:25.451 [2024-12-05 12:30:56.141281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.451 
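The statistics dump above is internally consistent: the WAF line is simply the ratio of the total media writes to the user writes reported two lines earlier. A quick arithmetic check (not part of the test output):

    # WAF = total writes / user writes, from the counters in the dump above
    awk 'BEGIN { printf "WAF: %.4f\n", 35520 / 34560 }'    # -> WAF: 1.0278

The 960-block difference (35520 - 34560) is the metadata and housekeeping traffic the FTL issued on top of the user I/O; a WAF this close to 1.0 indicates almost no background relocation took place during this run.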
[2024-12-05 12:30:56.181289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.451 [2024-12-05 12:30:56.181344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:25.451 [2024-12-05 12:30:56.181356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.451 [2024-12-05 12:30:56.181365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.451 [2024-12-05 12:30:56.181435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.451 [2024-12-05 12:30:56.181445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:25.451 [2024-12-05 12:30:56.181455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.451 [2024-12-05 12:30:56.181481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.451 [2024-12-05 12:30:56.181550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.451 [2024-12-05 12:30:56.181565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:25.451 [2024-12-05 12:30:56.181581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.451 [2024-12-05 12:30:56.181590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.451 [2024-12-05 12:30:56.181607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.451 [2024-12-05 12:30:56.181616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:25.451 [2024-12-05 12:30:56.181627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.451 [2024-12-05 12:30:56.181635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.451 [2024-12-05 12:30:56.273937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.451 [2024-12-05 12:30:56.274005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:25.451 [2024-12-05 12:30:56.274020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.451 [2024-12-05 12:30:56.274030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.711 [2024-12-05 12:30:56.348948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.712 [2024-12-05 12:30:56.349017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:25.712 [2024-12-05 12:30:56.349033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.712 [2024-12-05 12:30:56.349045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.712 [2024-12-05 12:30:56.349155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.712 [2024-12-05 12:30:56.349169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:25.712 [2024-12-05 12:30:56.349180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.712 [2024-12-05 12:30:56.349196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.712 [2024-12-05 12:30:56.349244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.712 [2024-12-05 12:30:56.349256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:25.712 [2024-12-05 12:30:56.349266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.712 [2024-12-05 12:30:56.349275] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.712 [2024-12-05 12:30:56.349385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.712 [2024-12-05 12:30:56.349409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:25.712 [2024-12-05 12:30:56.349417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.712 [2024-12-05 12:30:56.349426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.712 [2024-12-05 12:30:56.349498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.712 [2024-12-05 12:30:56.349520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:25.712 [2024-12-05 12:30:56.349530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.712 [2024-12-05 12:30:56.349539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.712 [2024-12-05 12:30:56.349595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.712 [2024-12-05 12:30:56.349608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:25.712 [2024-12-05 12:30:56.349617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.712 [2024-12-05 12:30:56.349627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.712 [2024-12-05 12:30:56.349686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.712 [2024-12-05 12:30:56.349709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:25.712 [2024-12-05 12:30:56.349719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.712 [2024-12-05 12:30:56.349728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.712 [2024-12-05 12:30:56.349904] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 745.932 ms, result 0 00:27:26.650 00:27:26.650 00:27:26.650 12:30:57 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:28.559 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:28.559 12:30:59 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:28.559 12:30:59 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:27:28.559 12:30:59 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:28.559 12:30:59 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:28.559 12:30:59 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:28.821 12:30:59 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77699 00:27:28.821 12:30:59 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77699 ']' 00:27:28.821 12:30:59 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77699 00:27:28.821 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77699) - No such process 00:27:28.821 Process with pid 77699 is not found 00:27:28.821 12:30:59 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77699 is not found' 00:27:28.821 12:30:59 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:27:28.821 Remove shared memory files 00:27:28.821 12:30:59 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 
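The teardown above is the actual pass/fail gate of the restore test: md5sum -c re-reads the test file through the freshly restored FTL device and compares it against the manifest recorded earlier (hence "testfile: OK"), killprocess probes pid 77699 with kill -0 to confirm the target is already gone, and remove_shm sweeps shared-memory leftovers. The checksum round-trip reduces to this pattern (illustrative paths, not the literal restore.sh code):

    # before the dirty shutdown: fingerprint the data written through the bdev
    md5sum testfile > testfile.md5
    # after restart + restore: verify byte-for-byte; prints "testfile: OK" and
    # exits 0 on a match, non-zero if the data did not survive
    md5sum -c testfile.md5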
00:27:28.821 12:30:59 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:27:28.821 12:30:59 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:27:28.821 12:30:59 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:27:28.821 12:30:59 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:28.821 12:30:59 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:27:28.821 00:27:28.821 real 5m5.783s 00:27:28.821 user 4m53.595s 00:27:28.821 sys 0m12.033s 00:27:28.821 ************************************ 00:27:28.821 END TEST ftl_restore 00:27:28.821 ************************************ 00:27:28.821 12:30:59 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:28.821 12:30:59 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:27:28.821 12:30:59 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:28.821 12:30:59 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:28.821 12:30:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:28.821 12:30:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:28.821 ************************************ 00:27:28.821 START TEST ftl_dirty_shutdown 00:27:28.821 ************************************ 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:28.821 * Looking for test storage... 00:27:28.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:28.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.821 --rc genhtml_branch_coverage=1 00:27:28.821 --rc genhtml_function_coverage=1 00:27:28.821 --rc genhtml_legend=1 00:27:28.821 --rc geninfo_all_blocks=1 00:27:28.821 --rc geninfo_unexecuted_blocks=1 00:27:28.821 00:27:28.821 ' 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:28.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.821 --rc genhtml_branch_coverage=1 00:27:28.821 --rc genhtml_function_coverage=1 00:27:28.821 --rc genhtml_legend=1 00:27:28.821 --rc geninfo_all_blocks=1 00:27:28.821 --rc geninfo_unexecuted_blocks=1 00:27:28.821 00:27:28.821 ' 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:28.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.821 --rc genhtml_branch_coverage=1 00:27:28.821 --rc genhtml_function_coverage=1 00:27:28.821 --rc genhtml_legend=1 00:27:28.821 --rc geninfo_all_blocks=1 00:27:28.821 --rc geninfo_unexecuted_blocks=1 00:27:28.821 00:27:28.821 ' 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:28.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.821 --rc genhtml_branch_coverage=1 00:27:28.821 --rc genhtml_function_coverage=1 00:27:28.821 --rc genhtml_legend=1 00:27:28.821 --rc geninfo_all_blocks=1 00:27:28.821 --rc geninfo_unexecuted_blocks=1 00:27:28.821 00:27:28.821 ' 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:28.821 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:27:28.822 12:30:59 
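The xtrace above shows dirty_shutdown.sh consuming its command line with getopts ':u:c:': the '-c 0000:00:10.0' argument lands in nv_cache, and the expanded 'shift 2' then leaves '0000:00:11.0' to be picked up as the positional base device on the next line. A minimal sketch of that convention (variable names taken from the trace; the meaning of '-u' is assumed, and the real script may differ in detail):

    while getopts ':u:c:' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;  # -c <bdf>: NVMe device to use as the NV cache
            u) uuid=$OPTARG ;;      # assumed: -u <uuid> targets an existing FTL instance
        esac
    done
    shift $((OPTIND - 1))           # OPTIND-1 == 2 here, matching the traced 'shift 2'
    device=$1                       # remaining positional: base device BDF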
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80939 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80939 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80939 ']' 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:28.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:28.822 12:30:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:29.080 [2024-12-05 12:30:59.783412] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
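At this point the harness has launched spdk_tgt on core mask 0x1 (pid 80939), and waitforlisten blocks until the target's RPC server answers on /var/tmp/spdk.sock before any bdev RPCs are sent. Stripped of its bookkeeping, the idea is a poll loop like this (a simplified sketch, not the actual autotest_common.sh helper):

    build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # poll the RPC socket; spdk_get_version is a cheap query that succeeds
    # only once the target is up and listening
    until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || exit 1   # bail out if the target died first
        sleep 0.5
    done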
00:27:29.080 [2024-12-05 12:30:59.783576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80939 ] 00:27:29.080 [2024-12-05 12:30:59.946738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.339 [2024-12-05 12:31:00.089531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.390 12:31:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:30.390 12:31:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:30.390 12:31:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:30.390 12:31:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:27:30.390 12:31:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:30.390 12:31:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:27:30.390 12:31:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:30.390 12:31:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:30.390 12:31:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:30.390 12:31:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:30.390 12:31:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:30.390 12:31:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:30.390 12:31:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:30.390 12:31:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:30.390 12:31:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:30.390 12:31:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:30.649 12:31:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:30.649 { 00:27:30.649 "name": "nvme0n1", 00:27:30.649 "aliases": [ 00:27:30.649 "34ee3854-e838-4cde-b349-4fc86355a7d9" 00:27:30.649 ], 00:27:30.649 "product_name": "NVMe disk", 00:27:30.649 "block_size": 4096, 00:27:30.649 "num_blocks": 1310720, 00:27:30.649 "uuid": "34ee3854-e838-4cde-b349-4fc86355a7d9", 00:27:30.649 "numa_id": -1, 00:27:30.649 "assigned_rate_limits": { 00:27:30.649 "rw_ios_per_sec": 0, 00:27:30.649 "rw_mbytes_per_sec": 0, 00:27:30.649 "r_mbytes_per_sec": 0, 00:27:30.649 "w_mbytes_per_sec": 0 00:27:30.649 }, 00:27:30.649 "claimed": true, 00:27:30.649 "claim_type": "read_many_write_one", 00:27:30.649 "zoned": false, 00:27:30.649 "supported_io_types": { 00:27:30.649 "read": true, 00:27:30.649 "write": true, 00:27:30.649 "unmap": true, 00:27:30.649 "flush": true, 00:27:30.649 "reset": true, 00:27:30.649 "nvme_admin": true, 00:27:30.649 "nvme_io": true, 00:27:30.649 "nvme_io_md": false, 00:27:30.649 "write_zeroes": true, 00:27:30.649 "zcopy": false, 00:27:30.649 "get_zone_info": false, 00:27:30.649 "zone_management": false, 00:27:30.649 "zone_append": false, 00:27:30.649 "compare": true, 00:27:30.649 "compare_and_write": false, 00:27:30.649 "abort": true, 00:27:30.649 "seek_hole": false, 00:27:30.649 "seek_data": false, 00:27:30.649 
"copy": true, 00:27:30.649 "nvme_iov_md": false 00:27:30.649 }, 00:27:30.649 "driver_specific": { 00:27:30.649 "nvme": [ 00:27:30.649 { 00:27:30.649 "pci_address": "0000:00:11.0", 00:27:30.649 "trid": { 00:27:30.649 "trtype": "PCIe", 00:27:30.649 "traddr": "0000:00:11.0" 00:27:30.649 }, 00:27:30.649 "ctrlr_data": { 00:27:30.649 "cntlid": 0, 00:27:30.649 "vendor_id": "0x1b36", 00:27:30.649 "model_number": "QEMU NVMe Ctrl", 00:27:30.649 "serial_number": "12341", 00:27:30.649 "firmware_revision": "8.0.0", 00:27:30.649 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:30.649 "oacs": { 00:27:30.649 "security": 0, 00:27:30.649 "format": 1, 00:27:30.649 "firmware": 0, 00:27:30.649 "ns_manage": 1 00:27:30.649 }, 00:27:30.649 "multi_ctrlr": false, 00:27:30.649 "ana_reporting": false 00:27:30.649 }, 00:27:30.649 "vs": { 00:27:30.649 "nvme_version": "1.4" 00:27:30.649 }, 00:27:30.649 "ns_data": { 00:27:30.649 "id": 1, 00:27:30.649 "can_share": false 00:27:30.649 } 00:27:30.649 } 00:27:30.649 ], 00:27:30.649 "mp_policy": "active_passive" 00:27:30.649 } 00:27:30.649 } 00:27:30.649 ]' 00:27:30.649 12:31:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:30.649 12:31:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:30.649 12:31:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:30.649 12:31:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:30.649 12:31:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:30.649 12:31:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:30.649 12:31:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:30.649 12:31:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:30.649 12:31:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:30.649 12:31:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:30.649 12:31:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:30.909 12:31:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=836b3555-8fb1-4cfc-8aa1-c48e5a983523 00:27:30.909 12:31:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:30.909 12:31:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 836b3555-8fb1-4cfc-8aa1-c48e5a983523 00:27:31.167 12:31:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:31.426 12:31:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=0c132bd4-defb-4f31-a8ff-77f24ab4abc6 00:27:31.426 12:31:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0c132bd4-defb-4f31-a8ff-77f24ab4abc6 00:27:31.685 12:31:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=40b2497c-72d3-493c-8877-7562964548a7 00:27:31.685 12:31:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:27:31.685 12:31:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 40b2497c-72d3-493c-8877-7562964548a7 00:27:31.685 12:31:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:27:31.685 12:31:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:27:31.685 12:31:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=40b2497c-72d3-493c-8877-7562964548a7 00:27:31.685 12:31:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:27:31.685 12:31:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 40b2497c-72d3-493c-8877-7562964548a7 00:27:31.685 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=40b2497c-72d3-493c-8877-7562964548a7 00:27:31.685 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:31.686 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:31.686 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:31.686 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 40b2497c-72d3-493c-8877-7562964548a7 00:27:31.943 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:31.943 { 00:27:31.943 "name": "40b2497c-72d3-493c-8877-7562964548a7", 00:27:31.943 "aliases": [ 00:27:31.943 "lvs/nvme0n1p0" 00:27:31.943 ], 00:27:31.943 "product_name": "Logical Volume", 00:27:31.943 "block_size": 4096, 00:27:31.943 "num_blocks": 26476544, 00:27:31.943 "uuid": "40b2497c-72d3-493c-8877-7562964548a7", 00:27:31.943 "assigned_rate_limits": { 00:27:31.943 "rw_ios_per_sec": 0, 00:27:31.943 "rw_mbytes_per_sec": 0, 00:27:31.943 "r_mbytes_per_sec": 0, 00:27:31.943 "w_mbytes_per_sec": 0 00:27:31.943 }, 00:27:31.943 "claimed": false, 00:27:31.943 "zoned": false, 00:27:31.943 "supported_io_types": { 00:27:31.943 "read": true, 00:27:31.943 "write": true, 00:27:31.943 "unmap": true, 00:27:31.943 "flush": false, 00:27:31.943 "reset": true, 00:27:31.943 "nvme_admin": false, 00:27:31.943 "nvme_io": false, 00:27:31.943 "nvme_io_md": false, 00:27:31.943 "write_zeroes": true, 00:27:31.943 "zcopy": false, 00:27:31.943 "get_zone_info": false, 00:27:31.943 "zone_management": false, 00:27:31.943 "zone_append": false, 00:27:31.943 "compare": false, 00:27:31.943 "compare_and_write": false, 00:27:31.943 "abort": false, 00:27:31.943 "seek_hole": true, 00:27:31.943 "seek_data": true, 00:27:31.943 "copy": false, 00:27:31.943 "nvme_iov_md": false 00:27:31.943 }, 00:27:31.943 "driver_specific": { 00:27:31.943 "lvol": { 00:27:31.943 "lvol_store_uuid": "0c132bd4-defb-4f31-a8ff-77f24ab4abc6", 00:27:31.943 "base_bdev": "nvme0n1", 00:27:31.943 "thin_provision": true, 00:27:31.943 "num_allocated_clusters": 0, 00:27:31.943 "snapshot": false, 00:27:31.943 "clone": false, 00:27:31.943 "esnap_clone": false 00:27:31.943 } 00:27:31.943 } 00:27:31.943 } 00:27:31.943 ]' 00:27:31.943 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:31.943 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:31.943 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:31.943 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:31.943 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:31.943 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:31.943 12:31:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:27:31.943 12:31:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:31.943 12:31:02 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:32.200 12:31:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:32.200 12:31:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:32.200 12:31:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 40b2497c-72d3-493c-8877-7562964548a7 00:27:32.200 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=40b2497c-72d3-493c-8877-7562964548a7 00:27:32.200 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:32.200 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:32.200 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:32.200 12:31:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 40b2497c-72d3-493c-8877-7562964548a7 00:27:32.456 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:32.456 { 00:27:32.456 "name": "40b2497c-72d3-493c-8877-7562964548a7", 00:27:32.456 "aliases": [ 00:27:32.456 "lvs/nvme0n1p0" 00:27:32.456 ], 00:27:32.456 "product_name": "Logical Volume", 00:27:32.456 "block_size": 4096, 00:27:32.456 "num_blocks": 26476544, 00:27:32.456 "uuid": "40b2497c-72d3-493c-8877-7562964548a7", 00:27:32.456 "assigned_rate_limits": { 00:27:32.456 "rw_ios_per_sec": 0, 00:27:32.456 "rw_mbytes_per_sec": 0, 00:27:32.456 "r_mbytes_per_sec": 0, 00:27:32.456 "w_mbytes_per_sec": 0 00:27:32.456 }, 00:27:32.456 "claimed": false, 00:27:32.456 "zoned": false, 00:27:32.456 "supported_io_types": { 00:27:32.456 "read": true, 00:27:32.456 "write": true, 00:27:32.456 "unmap": true, 00:27:32.456 "flush": false, 00:27:32.456 "reset": true, 00:27:32.456 "nvme_admin": false, 00:27:32.456 "nvme_io": false, 00:27:32.456 "nvme_io_md": false, 00:27:32.456 "write_zeroes": true, 00:27:32.456 "zcopy": false, 00:27:32.456 "get_zone_info": false, 00:27:32.456 "zone_management": false, 00:27:32.456 "zone_append": false, 00:27:32.456 "compare": false, 00:27:32.457 "compare_and_write": false, 00:27:32.457 "abort": false, 00:27:32.457 "seek_hole": true, 00:27:32.457 "seek_data": true, 00:27:32.457 "copy": false, 00:27:32.457 "nvme_iov_md": false 00:27:32.457 }, 00:27:32.457 "driver_specific": { 00:27:32.457 "lvol": { 00:27:32.457 "lvol_store_uuid": "0c132bd4-defb-4f31-a8ff-77f24ab4abc6", 00:27:32.457 "base_bdev": "nvme0n1", 00:27:32.457 "thin_provision": true, 00:27:32.457 "num_allocated_clusters": 0, 00:27:32.457 "snapshot": false, 00:27:32.457 "clone": false, 00:27:32.457 "esnap_clone": false 00:27:32.457 } 00:27:32.457 } 00:27:32.457 } 00:27:32.457 ]' 00:27:32.457 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:32.457 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:32.457 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:32.457 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:32.457 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:32.457 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:32.457 12:31:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:27:32.457 12:31:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:32.713 12:31:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:27:32.713 12:31:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 40b2497c-72d3-493c-8877-7562964548a7 00:27:32.713 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=40b2497c-72d3-493c-8877-7562964548a7 00:27:32.713 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:32.713 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:32.713 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:32.713 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 40b2497c-72d3-493c-8877-7562964548a7 00:27:32.969 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:32.969 { 00:27:32.969 "name": "40b2497c-72d3-493c-8877-7562964548a7", 00:27:32.969 "aliases": [ 00:27:32.969 "lvs/nvme0n1p0" 00:27:32.969 ], 00:27:32.969 "product_name": "Logical Volume", 00:27:32.969 "block_size": 4096, 00:27:32.969 "num_blocks": 26476544, 00:27:32.969 "uuid": "40b2497c-72d3-493c-8877-7562964548a7", 00:27:32.969 "assigned_rate_limits": { 00:27:32.969 "rw_ios_per_sec": 0, 00:27:32.969 "rw_mbytes_per_sec": 0, 00:27:32.969 "r_mbytes_per_sec": 0, 00:27:32.969 "w_mbytes_per_sec": 0 00:27:32.969 }, 00:27:32.969 "claimed": false, 00:27:32.969 "zoned": false, 00:27:32.969 "supported_io_types": { 00:27:32.969 "read": true, 00:27:32.969 "write": true, 00:27:32.969 "unmap": true, 00:27:32.969 "flush": false, 00:27:32.969 "reset": true, 00:27:32.969 "nvme_admin": false, 00:27:32.969 "nvme_io": false, 00:27:32.969 "nvme_io_md": false, 00:27:32.969 "write_zeroes": true, 00:27:32.969 "zcopy": false, 00:27:32.970 "get_zone_info": false, 00:27:32.970 "zone_management": false, 00:27:32.970 "zone_append": false, 00:27:32.970 "compare": false, 00:27:32.970 "compare_and_write": false, 00:27:32.970 "abort": false, 00:27:32.970 "seek_hole": true, 00:27:32.970 "seek_data": true, 00:27:32.970 "copy": false, 00:27:32.970 "nvme_iov_md": false 00:27:32.970 }, 00:27:32.970 "driver_specific": { 00:27:32.970 "lvol": { 00:27:32.970 "lvol_store_uuid": "0c132bd4-defb-4f31-a8ff-77f24ab4abc6", 00:27:32.970 "base_bdev": "nvme0n1", 00:27:32.970 "thin_provision": true, 00:27:32.970 "num_allocated_clusters": 0, 00:27:32.970 "snapshot": false, 00:27:32.970 "clone": false, 00:27:32.970 "esnap_clone": false 00:27:32.970 } 00:27:32.970 } 00:27:32.970 } 00:27:32.970 ]' 00:27:32.970 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:32.970 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:32.970 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:32.970 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:32.970 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:32.970 12:31:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:32.970 12:31:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:27:32.970 12:31:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 40b2497c-72d3-493c-8877-7562964548a7 
--l2p_dram_limit 10' 00:27:32.970 12:31:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:27:32.970 12:31:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:27:32.970 12:31:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:32.970 12:31:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 40b2497c-72d3-493c-8877-7562964548a7 --l2p_dram_limit 10 -c nvc0n1p0 00:27:33.227 [2024-12-05 12:31:03.855576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.227 [2024-12-05 12:31:03.855620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:33.227 [2024-12-05 12:31:03.855635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:33.227 [2024-12-05 12:31:03.855643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.227 [2024-12-05 12:31:03.855694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.227 [2024-12-05 12:31:03.855703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:33.227 [2024-12-05 12:31:03.855711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:33.227 [2024-12-05 12:31:03.855718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.227 [2024-12-05 12:31:03.855739] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:33.227 [2024-12-05 12:31:03.856311] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:33.227 [2024-12-05 12:31:03.856329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.227 [2024-12-05 12:31:03.856336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:33.227 [2024-12-05 12:31:03.856345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:27:33.227 [2024-12-05 12:31:03.856351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.227 [2024-12-05 12:31:03.856382] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 213de987-76bc-4387-96cd-ea9d7da49553 00:27:33.227 [2024-12-05 12:31:03.857726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.227 [2024-12-05 12:31:03.857754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:33.227 [2024-12-05 12:31:03.857764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:33.227 [2024-12-05 12:31:03.857776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.227 [2024-12-05 12:31:03.864780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.227 [2024-12-05 12:31:03.864812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:33.227 [2024-12-05 12:31:03.864821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.961 ms 00:27:33.227 [2024-12-05 12:31:03.864828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.227 [2024-12-05 12:31:03.864997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.227 [2024-12-05 12:31:03.865010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:33.227 [2024-12-05 12:31:03.865017] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:33.227 [2024-12-05 12:31:03.865028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.227 [2024-12-05 12:31:03.865063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.227 [2024-12-05 12:31:03.865073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:33.227 [2024-12-05 12:31:03.865082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:33.227 [2024-12-05 12:31:03.865090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.227 [2024-12-05 12:31:03.865108] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:33.227 [2024-12-05 12:31:03.868382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.227 [2024-12-05 12:31:03.868408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:33.227 [2024-12-05 12:31:03.868421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.278 ms 00:27:33.227 [2024-12-05 12:31:03.868427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.227 [2024-12-05 12:31:03.868457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.227 [2024-12-05 12:31:03.868473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:33.227 [2024-12-05 12:31:03.868481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:33.227 [2024-12-05 12:31:03.868487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.227 [2024-12-05 12:31:03.868512] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:33.227 [2024-12-05 12:31:03.868628] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:33.227 [2024-12-05 12:31:03.868646] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:33.227 [2024-12-05 12:31:03.868656] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:33.227 [2024-12-05 12:31:03.868666] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:33.227 [2024-12-05 12:31:03.868673] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:33.227 [2024-12-05 12:31:03.868682] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:33.227 [2024-12-05 12:31:03.868688] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:33.227 [2024-12-05 12:31:03.868697] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:33.228 [2024-12-05 12:31:03.868703] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:33.228 [2024-12-05 12:31:03.868712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.228 [2024-12-05 12:31:03.868723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:33.228 [2024-12-05 12:31:03.868731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:27:33.228 [2024-12-05 12:31:03.868737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.228 [2024-12-05 12:31:03.868805] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.228 [2024-12-05 12:31:03.868812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:33.228 [2024-12-05 12:31:03.868820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:33.228 [2024-12-05 12:31:03.868826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.228 [2024-12-05 12:31:03.868917] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:33.228 [2024-12-05 12:31:03.868926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:33.228 [2024-12-05 12:31:03.868934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:33.228 [2024-12-05 12:31:03.868942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.228 [2024-12-05 12:31:03.868949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:33.228 [2024-12-05 12:31:03.868955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:33.228 [2024-12-05 12:31:03.868963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:33.228 [2024-12-05 12:31:03.868969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:33.228 [2024-12-05 12:31:03.868977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:33.228 [2024-12-05 12:31:03.868983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:33.228 [2024-12-05 12:31:03.868991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:33.228 [2024-12-05 12:31:03.868997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:33.228 [2024-12-05 12:31:03.869004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:33.228 [2024-12-05 12:31:03.869010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:33.228 [2024-12-05 12:31:03.869018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:33.228 [2024-12-05 12:31:03.869028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.228 [2024-12-05 12:31:03.869037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:33.228 [2024-12-05 12:31:03.869043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:33.228 [2024-12-05 12:31:03.869052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.228 [2024-12-05 12:31:03.869057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:33.228 [2024-12-05 12:31:03.869064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:33.228 [2024-12-05 12:31:03.869070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.228 [2024-12-05 12:31:03.869077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:33.228 [2024-12-05 12:31:03.869084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:33.228 [2024-12-05 12:31:03.869091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.228 [2024-12-05 12:31:03.869097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:33.228 [2024-12-05 12:31:03.869104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:33.228 [2024-12-05 12:31:03.869110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.228 [2024-12-05 12:31:03.869117] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:33.228 [2024-12-05 12:31:03.869122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:33.228 [2024-12-05 12:31:03.869129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.228 [2024-12-05 12:31:03.869136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:33.228 [2024-12-05 12:31:03.869144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:33.228 [2024-12-05 12:31:03.869149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:33.228 [2024-12-05 12:31:03.869156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:33.228 [2024-12-05 12:31:03.869162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:33.228 [2024-12-05 12:31:03.869169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:33.228 [2024-12-05 12:31:03.869174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:33.228 [2024-12-05 12:31:03.869181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:33.228 [2024-12-05 12:31:03.869187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.228 [2024-12-05 12:31:03.869194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:33.228 [2024-12-05 12:31:03.869199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:33.228 [2024-12-05 12:31:03.869208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.228 [2024-12-05 12:31:03.869215] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:33.228 [2024-12-05 12:31:03.869223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:33.228 [2024-12-05 12:31:03.869229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:33.228 [2024-12-05 12:31:03.869238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.228 [2024-12-05 12:31:03.869244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:33.228 [2024-12-05 12:31:03.869253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:33.228 [2024-12-05 12:31:03.869259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:33.228 [2024-12-05 12:31:03.869266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:33.228 [2024-12-05 12:31:03.869272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:33.228 [2024-12-05 12:31:03.869279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:33.228 [2024-12-05 12:31:03.869288] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:33.228 [2024-12-05 12:31:03.869303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:33.228 [2024-12-05 12:31:03.869311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:33.228 [2024-12-05 12:31:03.869318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:33.228 [2024-12-05 12:31:03.869324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:33.228 [2024-12-05 12:31:03.869333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:33.228 [2024-12-05 12:31:03.869340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:33.228 [2024-12-05 12:31:03.869348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:33.228 [2024-12-05 12:31:03.869354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:33.228 [2024-12-05 12:31:03.869364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:33.228 [2024-12-05 12:31:03.869370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:33.228 [2024-12-05 12:31:03.869378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:33.228 [2024-12-05 12:31:03.869385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:33.228 [2024-12-05 12:31:03.869394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:33.229 [2024-12-05 12:31:03.869400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:33.229 [2024-12-05 12:31:03.869408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:33.229 [2024-12-05 12:31:03.869418] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:33.229 [2024-12-05 12:31:03.869427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:33.229 [2024-12-05 12:31:03.869434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:33.229 [2024-12-05 12:31:03.869442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:33.229 [2024-12-05 12:31:03.869448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:33.229 [2024-12-05 12:31:03.869457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:33.229 [2024-12-05 12:31:03.869476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.229 [2024-12-05 12:31:03.869485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:33.229 [2024-12-05 12:31:03.869491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:27:33.229 [2024-12-05 12:31:03.869500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.229 [2024-12-05 12:31:03.869543] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:33.229 [2024-12-05 12:31:03.869555] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:37.424 [2024-12-05 12:31:07.707114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.424 [2024-12-05 12:31:07.707227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:37.424 [2024-12-05 12:31:07.707247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3837.554 ms 00:27:37.424 [2024-12-05 12:31:07.707260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.424 [2024-12-05 12:31:07.745681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.424 [2024-12-05 12:31:07.745763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:37.424 [2024-12-05 12:31:07.745781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.115 ms 00:27:37.424 [2024-12-05 12:31:07.745793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.424 [2024-12-05 12:31:07.745960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.424 [2024-12-05 12:31:07.745976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:37.424 [2024-12-05 12:31:07.745987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:27:37.424 [2024-12-05 12:31:07.746006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.424 [2024-12-05 12:31:07.786354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.424 [2024-12-05 12:31:07.786420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:37.424 [2024-12-05 12:31:07.786433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.292 ms 00:27:37.424 [2024-12-05 12:31:07.786446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.424 [2024-12-05 12:31:07.786501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.424 [2024-12-05 12:31:07.786518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:37.424 [2024-12-05 12:31:07.786528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:37.424 [2024-12-05 12:31:07.786549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.424 [2024-12-05 12:31:07.787274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.424 [2024-12-05 12:31:07.787330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:37.424 [2024-12-05 12:31:07.787342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:27:37.424 [2024-12-05 12:31:07.787353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.424 [2024-12-05 12:31:07.787499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.424 [2024-12-05 12:31:07.787517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:37.424 [2024-12-05 12:31:07.787531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:27:37.424 [2024-12-05 12:31:07.787544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.424 [2024-12-05 12:31:07.808305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.424 [2024-12-05 12:31:07.808359] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:37.424 [2024-12-05 12:31:07.808371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.736 ms 00:27:37.424 [2024-12-05 12:31:07.808382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.424 [2024-12-05 12:31:07.837549] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:37.424 [2024-12-05 12:31:07.842658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.424 [2024-12-05 12:31:07.842705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:37.424 [2024-12-05 12:31:07.842722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.157 ms 00:27:37.424 [2024-12-05 12:31:07.842731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.424 [2024-12-05 12:31:07.951760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.424 [2024-12-05 12:31:07.951826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:37.424 [2024-12-05 12:31:07.951846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.971 ms 00:27:37.425 [2024-12-05 12:31:07.951858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.425 [2024-12-05 12:31:07.952089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.425 [2024-12-05 12:31:07.952108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:37.425 [2024-12-05 12:31:07.952125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:27:37.425 [2024-12-05 12:31:07.952135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.425 [2024-12-05 12:31:07.979045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.425 [2024-12-05 12:31:07.979099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:37.425 [2024-12-05 12:31:07.979117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.826 ms 00:27:37.425 [2024-12-05 12:31:07.979126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.425 [2024-12-05 12:31:08.004563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.425 [2024-12-05 12:31:08.004613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:37.425 [2024-12-05 12:31:08.004629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.373 ms 00:27:37.425 [2024-12-05 12:31:08.004637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.425 [2024-12-05 12:31:08.005329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.425 [2024-12-05 12:31:08.005358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:37.425 [2024-12-05 12:31:08.005372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:27:37.425 [2024-12-05 12:31:08.005385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.425 [2024-12-05 12:31:08.099928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.425 [2024-12-05 12:31:08.099984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:37.425 [2024-12-05 12:31:08.100006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.493 ms 00:27:37.425 [2024-12-05 12:31:08.100016] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.425 [2024-12-05 12:31:08.128692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.425 [2024-12-05 12:31:08.128748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:37.425 [2024-12-05 12:31:08.128767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.570 ms 00:27:37.425 [2024-12-05 12:31:08.128777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.425 [2024-12-05 12:31:08.155522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.425 [2024-12-05 12:31:08.155575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:37.425 [2024-12-05 12:31:08.155590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.685 ms 00:27:37.425 [2024-12-05 12:31:08.155598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.425 [2024-12-05 12:31:08.182427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.425 [2024-12-05 12:31:08.182493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:37.425 [2024-12-05 12:31:08.182509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.770 ms 00:27:37.425 [2024-12-05 12:31:08.182518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.425 [2024-12-05 12:31:08.182577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.425 [2024-12-05 12:31:08.182588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:37.425 [2024-12-05 12:31:08.182605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:37.425 [2024-12-05 12:31:08.182614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.425 [2024-12-05 12:31:08.182749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.425 [2024-12-05 12:31:08.182767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:37.425 [2024-12-05 12:31:08.182779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:37.425 [2024-12-05 12:31:08.182787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.425 [2024-12-05 12:31:08.184222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4328.036 ms, result 0 00:27:37.425 { 00:27:37.425 "name": "ftl0", 00:27:37.425 "uuid": "213de987-76bc-4387-96cd-ea9d7da49553" 00:27:37.425 } 00:27:37.425 12:31:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:27:37.425 12:31:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:37.687 12:31:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:27:37.687 12:31:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:27:37.687 12:31:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:27:37.950 /dev/nbd0 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:27:37.950 1+0 records in 00:27:37.950 1+0 records out 00:27:37.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551997 s, 7.4 MB/s 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:27:37.950 12:31:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:27:37.950 [2024-12-05 12:31:08.767518] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:27:37.950 [2024-12-05 12:31:08.767687] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81093 ] 00:27:38.212 [2024-12-05 12:31:08.938865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.212 [2024-12-05 12:31:09.063856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.601  [2024-12-05T12:31:11.415Z] Copying: 189/1024 [MB] (189 MBps) [2024-12-05T12:31:12.356Z] Copying: 391/1024 [MB] (201 MBps) [2024-12-05T12:31:13.730Z] Copying: 648/1024 [MB] (257 MBps) [2024-12-05T12:31:13.987Z] Copying: 897/1024 [MB] (248 MBps) [2024-12-05T12:31:14.555Z] Copying: 1024/1024 [MB] (average 227 MBps) 00:27:43.686 00:27:43.686 12:31:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:45.584 12:31:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:27:45.584 [2024-12-05 12:31:16.329534] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
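By this point the flow above has exported ftl0 through the kernel nbd driver, confirmed /dev/nbd0 is readable by pulling one 4096-byte block through dd, filled a test file with 262144 random blocks of 4096 bytes (1 GiB, which is why the copy meters count to 1024 MB), taken its md5sum, and started a second spdk_dd that streams the file onto the FTL device with O_DIRECT. A condensed sketch of that write-and-checksum path, substituting plain dd for spdk_dd; the testfile path here is illustrative:

    # Generate the payload (4096 B x 262144 = 1 GiB), record a reference
    # checksum, then push it through the nbd export with O_DIRECT.
    testfile=/tmp/ftl_testfile
    dd if=/dev/urandom of="$testfile" bs=4096 count=262144
    md5sum "$testfile"                            # reference checksum
    dd if="$testfile" of=/dev/nbd0 bs=4096 oflag=direct
    sync /dev/nbd0                                # flush before tearing the disk down

The checksum recorded here is presumably the reference a later stage of dirty_shutdown.sh compares against once the device is brought back up from its dirty state.
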
00:27:45.584 [2024-12-05 12:31:16.329622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81176 ] 00:27:45.841 [2024-12-05 12:31:16.479172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.841 [2024-12-05 12:31:16.556388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.297  [2024-12-05T12:31:18.740Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-05T12:31:19.738Z] Copying: 36/1024 [MB] (13 MBps) [2024-12-05T12:31:21.120Z] Copying: 60/1024 [MB] (24 MBps) [2024-12-05T12:31:22.061Z] Copying: 81/1024 [MB] (21 MBps) [2024-12-05T12:31:23.005Z] Copying: 106/1024 [MB] (24 MBps) [2024-12-05T12:31:23.945Z] Copying: 129/1024 [MB] (23 MBps) [2024-12-05T12:31:24.889Z] Copying: 151/1024 [MB] (22 MBps) [2024-12-05T12:31:25.830Z] Copying: 174/1024 [MB] (22 MBps) [2024-12-05T12:31:26.771Z] Copying: 200/1024 [MB] (25 MBps) [2024-12-05T12:31:28.158Z] Copying: 223/1024 [MB] (23 MBps) [2024-12-05T12:31:28.731Z] Copying: 249/1024 [MB] (25 MBps) [2024-12-05T12:31:30.109Z] Copying: 275/1024 [MB] (25 MBps) [2024-12-05T12:31:31.047Z] Copying: 301/1024 [MB] (25 MBps) [2024-12-05T12:31:31.988Z] Copying: 320/1024 [MB] (19 MBps) [2024-12-05T12:31:32.926Z] Copying: 340/1024 [MB] (19 MBps) [2024-12-05T12:31:33.870Z] Copying: 357/1024 [MB] (16 MBps) [2024-12-05T12:31:34.810Z] Copying: 375/1024 [MB] (17 MBps) [2024-12-05T12:31:35.752Z] Copying: 395/1024 [MB] (20 MBps) [2024-12-05T12:31:37.139Z] Copying: 418/1024 [MB] (22 MBps) [2024-12-05T12:31:38.080Z] Copying: 442/1024 [MB] (23 MBps) [2024-12-05T12:31:39.047Z] Copying: 461/1024 [MB] (19 MBps) [2024-12-05T12:31:40.004Z] Copying: 480/1024 [MB] (19 MBps) [2024-12-05T12:31:40.946Z] Copying: 500/1024 [MB] (19 MBps) [2024-12-05T12:31:41.887Z] Copying: 515/1024 [MB] (15 MBps) [2024-12-05T12:31:42.826Z] Copying: 530/1024 [MB] (14 MBps) [2024-12-05T12:31:43.811Z] Copying: 547/1024 [MB] (17 MBps) [2024-12-05T12:31:44.751Z] Copying: 563/1024 [MB] (15 MBps) [2024-12-05T12:31:46.135Z] Copying: 577/1024 [MB] (14 MBps) [2024-12-05T12:31:47.077Z] Copying: 593/1024 [MB] (15 MBps) [2024-12-05T12:31:48.020Z] Copying: 612/1024 [MB] (18 MBps) [2024-12-05T12:31:48.964Z] Copying: 634/1024 [MB] (22 MBps) [2024-12-05T12:31:49.910Z] Copying: 649/1024 [MB] (14 MBps) [2024-12-05T12:31:50.853Z] Copying: 662/1024 [MB] (13 MBps) [2024-12-05T12:31:51.823Z] Copying: 676/1024 [MB] (13 MBps) [2024-12-05T12:31:52.767Z] Copying: 692/1024 [MB] (16 MBps) [2024-12-05T12:31:53.762Z] Copying: 708/1024 [MB] (15 MBps) [2024-12-05T12:31:55.144Z] Copying: 721/1024 [MB] (13 MBps) [2024-12-05T12:31:56.088Z] Copying: 735/1024 [MB] (14 MBps) [2024-12-05T12:31:57.032Z] Copying: 749/1024 [MB] (14 MBps) [2024-12-05T12:31:57.976Z] Copying: 767/1024 [MB] (18 MBps) [2024-12-05T12:31:58.919Z] Copying: 781/1024 [MB] (13 MBps) [2024-12-05T12:31:59.863Z] Copying: 801/1024 [MB] (19 MBps) [2024-12-05T12:32:00.867Z] Copying: 817/1024 [MB] (16 MBps) [2024-12-05T12:32:01.808Z] Copying: 839/1024 [MB] (21 MBps) [2024-12-05T12:32:02.748Z] Copying: 857/1024 [MB] (18 MBps) [2024-12-05T12:32:04.137Z] Copying: 875/1024 [MB] (17 MBps) [2024-12-05T12:32:05.078Z] Copying: 893/1024 [MB] (18 MBps) [2024-12-05T12:32:06.027Z] Copying: 916/1024 [MB] (22 MBps) [2024-12-05T12:32:06.968Z] Copying: 936/1024 [MB] (19 MBps) [2024-12-05T12:32:07.912Z] Copying: 953/1024 [MB] (16 MBps) 
[2024-12-05T12:32:08.856Z] Copying: 970/1024 [MB] (17 MBps) [2024-12-05T12:32:09.797Z] Copying: 991/1024 [MB] (20 MBps) [2024-12-05T12:32:10.741Z] Copying: 1012/1024 [MB] (21 MBps) [2024-12-05T12:32:11.003Z] Copying: 1024/1024 [MB] (average 19 MBps) 00:28:40.134 00:28:40.134 12:32:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:40.134 12:32:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:40.394 12:32:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:40.656 [2024-12-05 12:32:11.373142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.656 [2024-12-05 12:32:11.373241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:40.656 [2024-12-05 12:32:11.373261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:40.656 [2024-12-05 12:32:11.373274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.656 [2024-12-05 12:32:11.373307] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:40.656 [2024-12-05 12:32:11.376603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.656 [2024-12-05 12:32:11.376650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:40.656 [2024-12-05 12:32:11.376666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.268 ms 00:28:40.656 [2024-12-05 12:32:11.376676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.656 [2024-12-05 12:32:11.380008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.656 [2024-12-05 12:32:11.380248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:40.656 [2024-12-05 12:32:11.380278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.277 ms 00:28:40.656 [2024-12-05 12:32:11.380288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.656 [2024-12-05 12:32:11.400093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.656 [2024-12-05 12:32:11.400151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:40.656 [2024-12-05 12:32:11.400168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.766 ms 00:28:40.656 [2024-12-05 12:32:11.400178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.656 [2024-12-05 12:32:11.406683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.656 [2024-12-05 12:32:11.406880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:40.656 [2024-12-05 12:32:11.406908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.443 ms 00:28:40.656 [2024-12-05 12:32:11.406917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.656 [2024-12-05 12:32:11.435439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.656 [2024-12-05 12:32:11.435517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:40.656 [2024-12-05 12:32:11.435536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.421 ms 00:28:40.656 [2024-12-05 12:32:11.435545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.656 [2024-12-05 12:32:11.455590] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.656 [2024-12-05 12:32:11.455646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:40.656 [2024-12-05 12:32:11.455667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.969 ms 00:28:40.656 [2024-12-05 12:32:11.455676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.656 [2024-12-05 12:32:11.455865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.656 [2024-12-05 12:32:11.455879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:40.656 [2024-12-05 12:32:11.455891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:28:40.656 [2024-12-05 12:32:11.455900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.656 [2024-12-05 12:32:11.483425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.656 [2024-12-05 12:32:11.483493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:40.656 [2024-12-05 12:32:11.483510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.498 ms 00:28:40.656 [2024-12-05 12:32:11.483517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.656 [2024-12-05 12:32:11.510554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.656 [2024-12-05 12:32:11.510608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:40.656 [2024-12-05 12:32:11.510623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.973 ms 00:28:40.656 [2024-12-05 12:32:11.510632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.919 [2024-12-05 12:32:11.537216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.919 [2024-12-05 12:32:11.537271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:40.919 [2024-12-05 12:32:11.537287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.516 ms 00:28:40.919 [2024-12-05 12:32:11.537295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.919 [2024-12-05 12:32:11.563854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.919 [2024-12-05 12:32:11.563907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:40.919 [2024-12-05 12:32:11.563922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.428 ms 00:28:40.919 [2024-12-05 12:32:11.563930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.919 [2024-12-05 12:32:11.563988] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:40.919 [2024-12-05 12:32:11.564006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [Bands 2-100: 0 / 261120 wr_cnt: 0 state: free; 99 identical per-band entries condensed] 00:28:40.920 [2024-12-05 12:32:11.565102] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:40.920 [2024-12-05 12:32:11.565114] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 213de987-76bc-4387-96cd-ea9d7da49553 00:28:40.920 [2024-12-05 12:32:11.565123] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:40.920 [2024-12-05 12:32:11.565135] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:40.920 [2024-12-05 12:32:11.565146] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:40.920 [2024-12-05 12:32:11.565157] ftl_debug.c:
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:40.920 [2024-12-05 12:32:11.565164] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:40.920 [2024-12-05 12:32:11.565175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:40.920 [2024-12-05 12:32:11.565184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:40.920 [2024-12-05 12:32:11.565194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:40.920 [2024-12-05 12:32:11.565201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:40.920 [2024-12-05 12:32:11.565212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.920 [2024-12-05 12:32:11.565220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:40.920 [2024-12-05 12:32:11.565232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:28:40.920 [2024-12-05 12:32:11.565240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.920 [2024-12-05 12:32:11.580484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.920 [2024-12-05 12:32:11.580530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:40.920 [2024-12-05 12:32:11.580545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.153 ms 00:28:40.920 [2024-12-05 12:32:11.580554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.920 [2024-12-05 12:32:11.581043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.920 [2024-12-05 12:32:11.581058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:40.920 [2024-12-05 12:32:11.581071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:28:40.920 [2024-12-05 12:32:11.581079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.920 [2024-12-05 12:32:11.633109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.920 [2024-12-05 12:32:11.633170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:40.920 [2024-12-05 12:32:11.633188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.920 [2024-12-05 12:32:11.633199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.920 [2024-12-05 12:32:11.633279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.920 [2024-12-05 12:32:11.633289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:40.920 [2024-12-05 12:32:11.633300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.920 [2024-12-05 12:32:11.633309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.920 [2024-12-05 12:32:11.633506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.920 [2024-12-05 12:32:11.633529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:40.920 [2024-12-05 12:32:11.633542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.920 [2024-12-05 12:32:11.633551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.920 [2024-12-05 12:32:11.633579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.921 [2024-12-05 12:32:11.633595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:28:40.921 [2024-12-05 12:32:11.633610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.921 [2024-12-05 12:32:11.633625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.921 [2024-12-05 12:32:11.726610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.921 [2024-12-05 12:32:11.726677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:40.921 [2024-12-05 12:32:11.726694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.921 [2024-12-05 12:32:11.726703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.182 [2024-12-05 12:32:11.803253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:41.182 [2024-12-05 12:32:11.803596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:41.182 [2024-12-05 12:32:11.803627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:41.182 [2024-12-05 12:32:11.803638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.182 [2024-12-05 12:32:11.803798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:41.182 [2024-12-05 12:32:11.803812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:41.182 [2024-12-05 12:32:11.803830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:41.182 [2024-12-05 12:32:11.803840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.182 [2024-12-05 12:32:11.803899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:41.182 [2024-12-05 12:32:11.803910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:41.182 [2024-12-05 12:32:11.803922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:41.182 [2024-12-05 12:32:11.803931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.182 [2024-12-05 12:32:11.804064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:41.182 [2024-12-05 12:32:11.804078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:41.182 [2024-12-05 12:32:11.804091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:41.182 [2024-12-05 12:32:11.804103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.182 [2024-12-05 12:32:11.804144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:41.182 [2024-12-05 12:32:11.804156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:41.183 [2024-12-05 12:32:11.804167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:41.183 [2024-12-05 12:32:11.804175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.183 [2024-12-05 12:32:11.804233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:41.183 [2024-12-05 12:32:11.804244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:41.183 [2024-12-05 12:32:11.804256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:41.183 [2024-12-05 12:32:11.804268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.183 [2024-12-05 12:32:11.804335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:41.183 [2024-12-05 12:32:11.804347] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:41.183 [2024-12-05 12:32:11.804359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:41.183 [2024-12-05 12:32:11.804367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:41.183 [2024-12-05 12:32:11.804580] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 431.358 ms, result 0 00:28:41.183 true 00:28:41.183 12:32:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80939 00:28:41.183 12:32:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80939 00:28:41.183 12:32:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:41.183 [2024-12-05 12:32:11.909165] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:28:41.183 [2024-12-05 12:32:11.909318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81752 ] 00:28:41.444 [2024-12-05 12:32:12.075854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.444 [2024-12-05 12:32:12.228014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.832  [2024-12-05T12:32:14.644Z] Copying: 180/1024 [MB] (180 MBps) [2024-12-05T12:32:15.598Z] Copying: 364/1024 [MB] (183 MBps) [2024-12-05T12:32:16.540Z] Copying: 550/1024 [MB] (186 MBps) [2024-12-05T12:32:17.920Z] Copying: 733/1024 [MB] (183 MBps) [2024-12-05T12:32:17.920Z] Copying: 950/1024 [MB] (216 MBps) [2024-12-05T12:32:18.489Z] Copying: 1024/1024 [MB] (average 193 MBps) 00:28:47.620 00:28:47.620 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80939 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:28:47.620 12:32:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:47.879 [2024-12-05 12:32:18.532195] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
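The commands logged above (dirty_shutdown.sh lines 83-88) reduce to four shell steps: abruptly kill the SPDK target, clear its trace file from /dev/shm, generate 1 GiB of random test data (262144 blocks of 4096 bytes), and write that data through the ftl0 bdev at a 262144-block offset using the saved ftl.json config. A minimal sketch of the same sequence follows; it is not the test script itself, and tgt_pid and FTL_DIR are shorthand for the PID (80939) and test directory visible in the log:

    #!/usr/bin/env bash
    SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
    FTL_DIR=/home/vagrant/spdk_repo/spdk/test/ftl

    kill -9 "$tgt_pid"                             # SIGKILL: the target gets no chance to run any shutdown path
    rm -f "/dev/shm/spdk_tgt_trace.pid$tgt_pid"    # drop the dead target's trace buffer

    # 262144 blocks * 4096 B = 1 GiB of random data into testfile2
    "$SPDK_BIN_DIR/spdk_dd" --if=/dev/urandom --of="$FTL_DIR/testfile2" \
        --bs=4096 --count=262144

    # replay that file into the ftl0 bdev, seeking 262144 blocks in;
    # loading ftl.json brings the FTL device up inside this spdk_dd process
    "$SPDK_BIN_DIR/spdk_dd" --if="$FTL_DIR/testfile2" --ob=ftl0 \
        --count=262144 --seek=262144 --json="$FTL_DIR/config/ftl.json"

The second spdk_dd run is the process (pid 81828) whose EAL initialization and 'FTL startup' messages follow below.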
00:28:47.879 [2024-12-05 12:32:18.532314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81828 ] 00:28:47.879 [2024-12-05 12:32:18.689577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.137 [2024-12-05 12:32:18.781742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.396 [2024-12-05 12:32:19.018733] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:48.396 [2024-12-05 12:32:19.018793] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:48.396 [2024-12-05 12:32:19.082536] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:48.396 [2024-12-05 12:32:19.083094] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:48.396 [2024-12-05 12:32:19.083679] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:48.656 [2024-12-05 12:32:19.332319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.656 [2024-12-05 12:32:19.332355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:48.656 [2024-12-05 12:32:19.332367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:48.656 [2024-12-05 12:32:19.332375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.656 [2024-12-05 12:32:19.332414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.656 [2024-12-05 12:32:19.332423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:48.656 [2024-12-05 12:32:19.332430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:48.656 [2024-12-05 12:32:19.332436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.656 [2024-12-05 12:32:19.332450] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:48.656 [2024-12-05 12:32:19.333083] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:48.656 [2024-12-05 12:32:19.333098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.656 [2024-12-05 12:32:19.333104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:48.656 [2024-12-05 12:32:19.333111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:28:48.656 [2024-12-05 12:32:19.333117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.656 [2024-12-05 12:32:19.334457] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:48.656 [2024-12-05 12:32:19.345210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.656 [2024-12-05 12:32:19.345239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:48.656 [2024-12-05 12:32:19.345249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.753 ms 00:28:48.656 [2024-12-05 12:32:19.345256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.656 [2024-12-05 12:32:19.345303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.656 [2024-12-05 12:32:19.345311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:28:48.656 [2024-12-05 12:32:19.345318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:48.656 [2024-12-05 12:32:19.345324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.656 [2024-12-05 12:32:19.351830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.656 [2024-12-05 12:32:19.351856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:48.657 [2024-12-05 12:32:19.351863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.461 ms 00:28:48.657 [2024-12-05 12:32:19.351870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.657 [2024-12-05 12:32:19.351929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.657 [2024-12-05 12:32:19.351936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:48.657 [2024-12-05 12:32:19.351942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:48.657 [2024-12-05 12:32:19.351948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.657 [2024-12-05 12:32:19.351984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.657 [2024-12-05 12:32:19.351992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:48.657 [2024-12-05 12:32:19.351999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:48.657 [2024-12-05 12:32:19.352004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.657 [2024-12-05 12:32:19.352019] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:48.657 [2024-12-05 12:32:19.355108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.657 [2024-12-05 12:32:19.355133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:48.657 [2024-12-05 12:32:19.355140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.093 ms 00:28:48.657 [2024-12-05 12:32:19.355146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.657 [2024-12-05 12:32:19.355174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.657 [2024-12-05 12:32:19.355182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:48.657 [2024-12-05 12:32:19.355188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:48.657 [2024-12-05 12:32:19.355194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.657 [2024-12-05 12:32:19.355211] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:48.657 [2024-12-05 12:32:19.355229] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:48.657 [2024-12-05 12:32:19.355257] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:48.657 [2024-12-05 12:32:19.355273] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:48.657 [2024-12-05 12:32:19.355356] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:48.657 [2024-12-05 12:32:19.355365] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:48.657 
[2024-12-05 12:32:19.355374] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:48.657 [2024-12-05 12:32:19.355384] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:48.657 [2024-12-05 12:32:19.355392] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:48.657 [2024-12-05 12:32:19.355398] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:48.657 [2024-12-05 12:32:19.355405] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:48.657 [2024-12-05 12:32:19.355410] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:48.657 [2024-12-05 12:32:19.355417] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:48.657 [2024-12-05 12:32:19.355423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.657 [2024-12-05 12:32:19.355429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:48.657 [2024-12-05 12:32:19.355439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:28:48.657 [2024-12-05 12:32:19.355445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.657 [2024-12-05 12:32:19.355525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.657 [2024-12-05 12:32:19.355536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:48.657 [2024-12-05 12:32:19.355542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:28:48.657 [2024-12-05 12:32:19.355547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.657 [2024-12-05 12:32:19.355633] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:48.657 [2024-12-05 12:32:19.355641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:48.657 [2024-12-05 12:32:19.355648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.657 [2024-12-05 12:32:19.355654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.657 [2024-12-05 12:32:19.355660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:48.657 [2024-12-05 12:32:19.355666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:48.657 [2024-12-05 12:32:19.355672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:48.657 [2024-12-05 12:32:19.355678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:48.657 [2024-12-05 12:32:19.355683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:48.657 [2024-12-05 12:32:19.355693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.657 [2024-12-05 12:32:19.355701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:48.657 [2024-12-05 12:32:19.355706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:48.657 [2024-12-05 12:32:19.355711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.657 [2024-12-05 12:32:19.355716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:48.657 [2024-12-05 12:32:19.355721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:48.657 [2024-12-05 12:32:19.355726] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.657 [2024-12-05 12:32:19.355731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:48.657 [2024-12-05 12:32:19.355736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:48.657 [2024-12-05 12:32:19.355742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.657 [2024-12-05 12:32:19.355748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:48.657 [2024-12-05 12:32:19.355753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:48.657 [2024-12-05 12:32:19.355760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.657 [2024-12-05 12:32:19.355765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:48.657 [2024-12-05 12:32:19.355770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:48.657 [2024-12-05 12:32:19.355784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.657 [2024-12-05 12:32:19.355794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:48.657 [2024-12-05 12:32:19.355799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:48.657 [2024-12-05 12:32:19.355804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.657 [2024-12-05 12:32:19.355809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:48.657 [2024-12-05 12:32:19.355815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:48.657 [2024-12-05 12:32:19.355821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.657 [2024-12-05 12:32:19.355826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:48.657 [2024-12-05 12:32:19.355831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:48.657 [2024-12-05 12:32:19.355836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.657 [2024-12-05 12:32:19.355842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:48.657 [2024-12-05 12:32:19.355847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:48.657 [2024-12-05 12:32:19.355853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.657 [2024-12-05 12:32:19.355859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:48.657 [2024-12-05 12:32:19.355864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:48.657 [2024-12-05 12:32:19.355869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.657 [2024-12-05 12:32:19.355874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:48.657 [2024-12-05 12:32:19.355879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:48.657 [2024-12-05 12:32:19.355885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.657 [2024-12-05 12:32:19.355890] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:48.657 [2024-12-05 12:32:19.355896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:48.657 [2024-12-05 12:32:19.355904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.657 [2024-12-05 12:32:19.355922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.657 [2024-12-05 
12:32:19.355928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:48.657 [2024-12-05 12:32:19.355934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:48.657 [2024-12-05 12:32:19.355939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:48.657 [2024-12-05 12:32:19.355945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:48.657 [2024-12-05 12:32:19.355950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:48.657 [2024-12-05 12:32:19.355955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:48.657 [2024-12-05 12:32:19.355963] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:48.657 [2024-12-05 12:32:19.355971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.657 [2024-12-05 12:32:19.355978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:48.657 [2024-12-05 12:32:19.355983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:48.657 [2024-12-05 12:32:19.355989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:48.657 [2024-12-05 12:32:19.355994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:48.657 [2024-12-05 12:32:19.356000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:48.657 [2024-12-05 12:32:19.356005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:48.657 [2024-12-05 12:32:19.356010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:48.658 [2024-12-05 12:32:19.356016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:48.658 [2024-12-05 12:32:19.356023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:48.658 [2024-12-05 12:32:19.356029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:48.658 [2024-12-05 12:32:19.356037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:48.658 [2024-12-05 12:32:19.356042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:48.658 [2024-12-05 12:32:19.356048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:48.658 [2024-12-05 12:32:19.356053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:48.658 [2024-12-05 12:32:19.356059] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:28:48.658 [2024-12-05 12:32:19.356066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.658 [2024-12-05 12:32:19.356073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:48.658 [2024-12-05 12:32:19.356078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:48.658 [2024-12-05 12:32:19.356084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:48.658 [2024-12-05 12:32:19.356090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:48.658 [2024-12-05 12:32:19.356096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.658 [2024-12-05 12:32:19.356102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:48.658 [2024-12-05 12:32:19.356109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:28:48.658 [2024-12-05 12:32:19.356115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.658 [2024-12-05 12:32:19.381089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.658 [2024-12-05 12:32:19.381120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:48.658 [2024-12-05 12:32:19.381133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.929 ms 00:28:48.658 [2024-12-05 12:32:19.381140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.658 [2024-12-05 12:32:19.381211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.658 [2024-12-05 12:32:19.381219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:48.658 [2024-12-05 12:32:19.381225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:48.658 [2024-12-05 12:32:19.381231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.658 [2024-12-05 12:32:19.427542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.658 [2024-12-05 12:32:19.427575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:48.658 [2024-12-05 12:32:19.427587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.267 ms 00:28:48.658 [2024-12-05 12:32:19.427593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.658 [2024-12-05 12:32:19.427627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.658 [2024-12-05 12:32:19.427635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:48.658 [2024-12-05 12:32:19.427643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:48.658 [2024-12-05 12:32:19.427649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.658 [2024-12-05 12:32:19.428075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.658 [2024-12-05 12:32:19.428089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:48.658 [2024-12-05 12:32:19.428097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:28:48.658 [2024-12-05 12:32:19.428111] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.658 [2024-12-05 12:32:19.428223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.658 [2024-12-05 12:32:19.428231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:48.658 [2024-12-05 12:32:19.428238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:28:48.658 [2024-12-05 12:32:19.428244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.658 [2024-12-05 12:32:19.440418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.658 [2024-12-05 12:32:19.440444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:48.658 [2024-12-05 12:32:19.440453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.157 ms 00:28:48.658 [2024-12-05 12:32:19.440459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.658 [2024-12-05 12:32:19.451083] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:48.658 [2024-12-05 12:32:19.451112] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:48.658 [2024-12-05 12:32:19.451122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.658 [2024-12-05 12:32:19.451129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:48.658 [2024-12-05 12:32:19.451136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.572 ms 00:28:48.658 [2024-12-05 12:32:19.451142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.658 [2024-12-05 12:32:19.470342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.658 [2024-12-05 12:32:19.470372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:48.658 [2024-12-05 12:32:19.470382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.165 ms 00:28:48.658 [2024-12-05 12:32:19.470389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.658 [2024-12-05 12:32:19.479813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.658 [2024-12-05 12:32:19.479840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:48.658 [2024-12-05 12:32:19.479848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.388 ms 00:28:48.658 [2024-12-05 12:32:19.479853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.658 [2024-12-05 12:32:19.489174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.658 [2024-12-05 12:32:19.489200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:48.658 [2024-12-05 12:32:19.489208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.292 ms 00:28:48.658 [2024-12-05 12:32:19.489213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.658 [2024-12-05 12:32:19.489746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.658 [2024-12-05 12:32:19.489765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:48.658 [2024-12-05 12:32:19.489773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:28:48.658 [2024-12-05 12:32:19.489779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.917 
[2024-12-05 12:32:19.539751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.917 [2024-12-05 12:32:19.539787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:48.917 [2024-12-05 12:32:19.539797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.957 ms 00:28:48.917 [2024-12-05 12:32:19.539804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.917 [2024-12-05 12:32:19.547987] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:48.917 [2024-12-05 12:32:19.550429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.917 [2024-12-05 12:32:19.550454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:48.917 [2024-12-05 12:32:19.550481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.583 ms 00:28:48.917 [2024-12-05 12:32:19.550492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.917 [2024-12-05 12:32:19.550569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.917 [2024-12-05 12:32:19.550577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:48.917 [2024-12-05 12:32:19.550585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:48.917 [2024-12-05 12:32:19.550592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.917 [2024-12-05 12:32:19.550650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.917 [2024-12-05 12:32:19.550660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:48.917 [2024-12-05 12:32:19.550667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:48.917 [2024-12-05 12:32:19.550673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.917 [2024-12-05 12:32:19.550693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.917 [2024-12-05 12:32:19.550700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:48.917 [2024-12-05 12:32:19.550707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:48.917 [2024-12-05 12:32:19.550713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.917 [2024-12-05 12:32:19.550742] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:48.917 [2024-12-05 12:32:19.550750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.917 [2024-12-05 12:32:19.550757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:48.917 [2024-12-05 12:32:19.550764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:48.917 [2024-12-05 12:32:19.550774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.918 [2024-12-05 12:32:19.569177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.918 [2024-12-05 12:32:19.569206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:48.918 [2024-12-05 12:32:19.569214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.388 ms 00:28:48.918 [2024-12-05 12:32:19.569224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.918 [2024-12-05 12:32:19.569287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.918 [2024-12-05 12:32:19.569295] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:48.918 [2024-12-05 12:32:19.569302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:48.918 [2024-12-05 12:32:19.569312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.918 [2024-12-05 12:32:19.570638] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 237.880 ms, result 0 00:28:49.852  [2024-12-05T12:32:21.655Z] Copying: 13/1024 [MB] (13 MBps) [intermediate Copying progress updates from 13 MB to 1004 MB condensed] [2024-12-05T12:33:40.673Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-05 12:33:40.388772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.804 [2024-12-05 12:33:40.388831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:09.804 [2024-12-05 12:33:40.388848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:09.804 [2024-12-05 12:33:40.388858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.804 [2024-12-05 12:33:40.388888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:09.804 [2024-12-05 12:33:40.391799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.804 [2024-12-05 12:33:40.391837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:09.804 [2024-12-05 12:33:40.391848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.895 ms 00:30:09.804 [2024-12-05 12:33:40.391856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.804 [2024-12-05 12:33:40.394371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.804 [2024-12-05 12:33:40.394407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:09.804 [2024-12-05 12:33:40.394417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.489 ms 00:30:09.804 [2024-12-05 12:33:40.394425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.804 [2024-12-05 12:33:40.411921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.804 [2024-12-05 12:33:40.411968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:30:09.804 [2024-12-05 12:33:40.411980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.479 ms
00:30:09.804 [2024-12-05 12:33:40.411988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:09.804 [2024-12-05 12:33:40.418179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.804 [2024-12-05 12:33:40.418218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:30:09.804 [2024-12-05 12:33:40.418229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.166 ms
00:30:09.804 [2024-12-05 12:33:40.418238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:09.804 [2024-12-05 12:33:40.444188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.805 [2024-12-05 12:33:40.444228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:30:09.805 [2024-12-05 12:33:40.444241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.897 ms
00:30:09.805 [2024-12-05 12:33:40.444251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:09.805 [2024-12-05 12:33:40.460073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.805 [2024-12-05 12:33:40.460111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:30:09.805 [2024-12-05 12:33:40.460123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.784 ms
00:30:09.805 [2024-12-05 12:33:40.460131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:09.805 [2024-12-05 12:33:40.462838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.805 [2024-12-05 12:33:40.462876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:30:09.805 [2024-12-05 12:33:40.462893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.665 ms
00:30:09.805 [2024-12-05 12:33:40.462901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:09.805 [2024-12-05 12:33:40.487750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.805 [2024-12-05 12:33:40.487789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:30:09.805 [2024-12-05 12:33:40.487800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.834 ms
00:30:09.805 [2024-12-05 12:33:40.487820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:09.805 [2024-12-05 12:33:40.512882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.805 [2024-12-05 12:33:40.512922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:30:09.805 [2024-12-05 12:33:40.512933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.023 ms
00:30:09.805 [2024-12-05 12:33:40.512942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:09.805 [2024-12-05 12:33:40.537429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.805 [2024-12-05 12:33:40.537479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:30:09.805 [2024-12-05 12:33:40.537491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.446 ms
00:30:09.805 [2024-12-05 12:33:40.537499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:09.805 [2024-12-05 12:33:40.562134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.805 [2024-12-05 12:33:40.562180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:30:09.805 [2024-12-05 12:33:40.562192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.564 ms
00:30:09.805 [2024-12-05 12:33:40.562200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:09.805 [2024-12-05 12:33:40.562244] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:30:09.805 [2024-12-05 12:33:40.562261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 1024 / 261120 wr_cnt: 1 state: open
00:30:09.805 [2024-12-05 12:33:40.562273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:30:09.805 [2024-12-05 12:33:40.562855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.562994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:30:09.806 [2024-12-05 12:33:40.563139] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:30:09.806 [2024-12-05 12:33:40.563147] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 213de987-76bc-4387-96cd-ea9d7da49553
00:30:09.806 [2024-12-05 12:33:40.563167] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 1024
00:30:09.806 [2024-12-05 12:33:40.563178] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1984
00:30:09.806 [2024-12-05 12:33:40.563187] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1024
00:30:09.806 [2024-12-05 12:33:40.563197] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.9375
00:30:09.806 [2024-12-05 12:33:40.563204] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:30:09.806 [2024-12-05 12:33:40.563212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:30:09.806 [2024-12-05 12:33:40.563221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:30:09.806 [2024-12-05 12:33:40.563229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:30:09.806 [2024-12-05 12:33:40.563236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:30:09.806 [2024-12-05 12:33:40.563244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.806 [2024-12-05 12:33:40.563253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:30:09.806 [2024-12-05 12:33:40.563262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms
00:30:09.806 [2024-12-05 12:33:40.563269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:09.806 [2024-12-05 12:33:40.577664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.806 [2024-12-05 12:33:40.577708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:30:09.806 [2024-12-05 12:33:40.577720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.370 ms
00:30:09.806 [2024-12-05 12:33:40.577729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:09.806 [2024-12-05 12:33:40.578168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.806 [2024-12-05 12:33:40.578192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:30:09.806 [2024-12-05 12:33:40.578203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms
00:30:09.806 [2024-12-05 12:33:40.578221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:09.806 [2024-12-05 12:33:40.617698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:09.806 [2024-12-05 12:33:40.617748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:30:09.806 [2024-12-05 12:33:40.617760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:09.806 [2024-12-05 12:33:40.617769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:09.806 [2024-12-05 12:33:40.617843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:09.806 [2024-12-05 12:33:40.617852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:30:09.806 [2024-12-05 12:33:40.617861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:09.806 [2024-12-05 12:33:40.617877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:09.806 [2024-12-05 12:33:40.617957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:09.806 [2024-12-05 12:33:40.617968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:30:09.806 [2024-12-05 12:33:40.617979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:09.806 [2024-12-05 12:33:40.617989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:09.806 [2024-12-05 12:33:40.618008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:09.806 [2024-12-05 12:33:40.618019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:30:09.806 [2024-12-05 12:33:40.618027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:09.806 [2024-12-05 12:33:40.618036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:10.068 [2024-12-05 12:33:40.710762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:10.068 [2024-12-05 12:33:40.710831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:30:10.068 [2024-12-05 12:33:40.710847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:10.068 [2024-12-05 12:33:40.710856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:10.068 [2024-12-05 12:33:40.786493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:10.068 [2024-12-05 12:33:40.786557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:30:10.068 [2024-12-05 12:33:40.786572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:10.068 [2024-12-05 12:33:40.786583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:10.068 [2024-12-05 12:33:40.786718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:10.068 [2024-12-05 12:33:40.786731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:30:10.068 [2024-12-05 12:33:40.786742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:10.068 [2024-12-05 12:33:40.786754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:10.068 [2024-12-05 12:33:40.786799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:10.068 [2024-12-05 12:33:40.786811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:30:10.068 [2024-12-05 12:33:40.786821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:10.068 [2024-12-05 12:33:40.786832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:10.068 [2024-12-05 12:33:40.786944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:10.068 [2024-12-05 12:33:40.786960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:30:10.068 [2024-12-05 12:33:40.786971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:10.068 [2024-12-05 12:33:40.786980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:10.068 [2024-12-05 12:33:40.787022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:10.068 [2024-12-05 12:33:40.787036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:30:10.068 [2024-12-05 12:33:40.787046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:10.068 [2024-12-05 12:33:40.787055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:10.068 [2024-12-05 12:33:40.787107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:10.068 [2024-12-05 12:33:40.787124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:30:10.068 [2024-12-05 12:33:40.787133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:10.068 [2024-12-05 12:33:40.787141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:10.068 [2024-12-05 12:33:40.787204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:10.068 [2024-12-05 12:33:40.787219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:30:10.068 [2024-12-05 12:33:40.787227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:10.068 [2024-12-05 12:33:40.787238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:10.068 [2024-12-05 12:33:40.787407] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 398.596 ms, result 0
00:30:11.042 
00:30:11.042 
00:30:11.042 12:33:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:30:12.954 12:33:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
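[Editor's note] The ftl_dev_dump_stats block in the shutdown trace above reports total writes 1984, user writes 1024, and WAF 1.9375. WAF (write amplification factor) is media writes divided by host writes, so the printed figure can be reproduced directly from the two counters; a minimal Python check, interpreting both counters as FTL block counts exactly as printed in the log:

    # Reproduce the "WAF: 1.9375" figure from the ftl_dev_dump_stats output above.
    total_writes = 1984   # log: "total writes: 1984" (all media writes)
    user_writes = 1024    # log: "user writes: 1024" (host-initiated writes)
    print(total_writes / user_writes)   # -> 1.9375, matching the logged WAF

The 960 extra block writes beyond the 1024 user writes are consistent with the metadata persists (L2P, NV cache, valid map, P2L, band info, trim, superblock) traced during the clean shutdown above.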
00:30:13.215 [2024-12-05 12:33:43.879899] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:30:13.215 [2024-12-05 12:33:43.880103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82685 ]
00:30:13.215 [2024-12-05 12:33:44.040850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:13.477 [2024-12-05 12:33:44.165894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:13.739 [2024-12-05 12:33:44.498149] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:30:13.739 [2024-12-05 12:33:44.498219] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:30:14.002 [2024-12-05 12:33:44.660147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.002 [2024-12-05 12:33:44.660196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:30:14.002 [2024-12-05 12:33:44.660210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:30:14.002 [2024-12-05 12:33:44.660220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.002 [2024-12-05 12:33:44.660269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.002 [2024-12-05 12:33:44.660281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:30:14.002 [2024-12-05 12:33:44.660290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms
00:30:14.002 [2024-12-05 12:33:44.660298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.002 [2024-12-05 12:33:44.660319] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:30:14.002 [2024-12-05 12:33:44.661048] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:30:14.002 [2024-12-05 12:33:44.661073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.002 [2024-12-05 12:33:44.661082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:30:14.002 [2024-12-05 12:33:44.661090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms
00:30:14.002 [2024-12-05 12:33:44.661098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.002 [2024-12-05 12:33:44.662657] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:30:14.002 [2024-12-05 12:33:44.676599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.002 [2024-12-05 12:33:44.676636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:30:14.002 [2024-12-05 12:33:44.676649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.943 ms
00:30:14.002 [2024-12-05 12:33:44.676656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.002 [2024-12-05 12:33:44.676722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.002 [2024-12-05 12:33:44.676743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:30:14.002 [2024-12-05 12:33:44.676752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:30:14.002 [2024-12-05 12:33:44.676760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.002 [2024-12-05 12:33:44.684687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.002 [2024-12-05 12:33:44.684718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:30:14.002 [2024-12-05 12:33:44.684738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.857 ms
00:30:14.002 [2024-12-05 12:33:44.684756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.002 [2024-12-05 12:33:44.684829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.002 [2024-12-05 12:33:44.684838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:30:14.002 [2024-12-05 12:33:44.684847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:30:14.002 [2024-12-05 12:33:44.684855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.002 [2024-12-05 12:33:44.684896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.002 [2024-12-05 12:33:44.684907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:30:14.002 [2024-12-05 12:33:44.684916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:30:14.002 [2024-12-05 12:33:44.684924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.002 [2024-12-05 12:33:44.684949] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:30:14.002 [2024-12-05 12:33:44.688941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.002 [2024-12-05 12:33:44.688973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:30:14.002 [2024-12-05 12:33:44.688986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.997 ms
00:30:14.002 [2024-12-05 12:33:44.688994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.002 [2024-12-05 12:33:44.689025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.002 [2024-12-05 12:33:44.689034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:30:14.002 [2024-12-05 12:33:44.689042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:30:14.002 [2024-12-05 12:33:44.689050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.002 [2024-12-05 12:33:44.689094] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:30:14.002 [2024-12-05 12:33:44.689116] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:30:14.002 [2024-12-05 12:33:44.689154] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:30:14.002 [2024-12-05 12:33:44.689174] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:30:14.002 [2024-12-05 12:33:44.689284] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:30:14.002 [2024-12-05 12:33:44.689297] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:30:14.002 [2024-12-05 12:33:44.689308] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:30:14.002 [2024-12-05 12:33:44.689319] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:30:14.002 [2024-12-05 12:33:44.689328] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:30:14.002 [2024-12-05 12:33:44.689338] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:30:14.002 [2024-12-05 12:33:44.689346] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:30:14.002 [2024-12-05 12:33:44.689357] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:30:14.002 [2024-12-05 12:33:44.689366] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:30:14.002 [2024-12-05 12:33:44.689374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.002 [2024-12-05 12:33:44.689383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:30:14.002 [2024-12-05 12:33:44.689392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms
00:30:14.003 [2024-12-05 12:33:44.689399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.003 [2024-12-05 12:33:44.689506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.003 [2024-12-05 12:33:44.689516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:30:14.003 [2024-12-05 12:33:44.689524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms
00:30:14.003 [2024-12-05 12:33:44.689532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.003 [2024-12-05 12:33:44.689646] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:30:14.003 [2024-12-05 12:33:44.689665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:30:14.003 [2024-12-05 12:33:44.689676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:30:14.003 [2024-12-05 12:33:44.689684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:30:14.003 [2024-12-05 12:33:44.689692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:30:14.003 [2024-12-05 12:33:44.689699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:30:14.003 [2024-12-05 12:33:44.689706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:30:14.003 [2024-12-05 12:33:44.689714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:30:14.003 [2024-12-05 12:33:44.689722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:30:14.003 [2024-12-05 12:33:44.689730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:30:14.003 [2024-12-05 12:33:44.689737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:30:14.003 [2024-12-05 12:33:44.689745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:30:14.003 [2024-12-05 12:33:44.689752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:30:14.003 [2024-12-05 12:33:44.689766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:30:14.003 [2024-12-05 12:33:44.689776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:30:14.003 [2024-12-05 12:33:44.689784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:30:14.003 [2024-12-05 12:33:44.689792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:30:14.003 [2024-12-05 12:33:44.689799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:30:14.003 [2024-12-05 12:33:44.689806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:30:14.003 [2024-12-05 12:33:44.689813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:30:14.003 [2024-12-05 12:33:44.689821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:30:14.003 [2024-12-05 12:33:44.689828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:30:14.003 [2024-12-05 12:33:44.689835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:30:14.003 [2024-12-05 12:33:44.689842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:30:14.003 [2024-12-05 12:33:44.689848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:30:14.003 [2024-12-05 12:33:44.689854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:30:14.003 [2024-12-05 12:33:44.689861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:30:14.003 [2024-12-05 12:33:44.689867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:30:14.003 [2024-12-05 12:33:44.689874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:30:14.003 [2024-12-05 12:33:44.689881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:30:14.003 [2024-12-05 12:33:44.689888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:30:14.003 [2024-12-05 12:33:44.689894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:30:14.003 [2024-12-05 12:33:44.689900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:30:14.003 [2024-12-05 12:33:44.689906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:30:14.003 [2024-12-05 12:33:44.689913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:30:14.003 [2024-12-05 12:33:44.689919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:30:14.003 [2024-12-05 12:33:44.689927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:30:14.003 [2024-12-05 12:33:44.689935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:30:14.003 [2024-12-05 12:33:44.689941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:30:14.003 [2024-12-05 12:33:44.689947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:30:14.003 [2024-12-05 12:33:44.689954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:30:14.003 [2024-12-05 12:33:44.689960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:30:14.003 [2024-12-05 12:33:44.689968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:30:14.003 [2024-12-05 12:33:44.689974] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:30:14.003 [2024-12-05 12:33:44.689982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:30:14.003 [2024-12-05 12:33:44.689990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:30:14.003 [2024-12-05 12:33:44.689999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:30:14.003 [2024-12-05 12:33:44.690009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:30:14.003 [2024-12-05 12:33:44.690016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:30:14.003 [2024-12-05 12:33:44.690023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:30:14.003 [2024-12-05 12:33:44.690031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:30:14.003 [2024-12-05 12:33:44.690040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:30:14.003 [2024-12-05 12:33:44.690047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:30:14.003 [2024-12-05 12:33:44.690056] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:30:14.003 [2024-12-05 12:33:44.690065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:30:14.003 [2024-12-05 12:33:44.690077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:30:14.003 [2024-12-05 12:33:44.690084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:30:14.003 [2024-12-05 12:33:44.690091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:30:14.003 [2024-12-05 12:33:44.690099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:30:14.003 [2024-12-05 12:33:44.690106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:30:14.003 [2024-12-05 12:33:44.690113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:30:14.003 [2024-12-05 12:33:44.690120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:30:14.003 [2024-12-05 12:33:44.690127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:30:14.003 [2024-12-05 12:33:44.690135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:30:14.003 [2024-12-05 12:33:44.690142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:30:14.003 [2024-12-05 12:33:44.690149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:30:14.003 [2024-12-05 12:33:44.690156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:30:14.003 [2024-12-05 12:33:44.690164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:30:14.003 [2024-12-05 12:33:44.690172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:30:14.003 [2024-12-05 12:33:44.690179] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:30:14.003 [2024-12-05 12:33:44.690187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:30:14.003 [2024-12-05 12:33:44.690196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:30:14.003 [2024-12-05 12:33:44.690204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:30:14.003 [2024-12-05 12:33:44.690213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:30:14.003 [2024-12-05 12:33:44.690221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:30:14.003 [2024-12-05 12:33:44.690229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.003 [2024-12-05 12:33:44.690236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:30:14.003 [2024-12-05 12:33:44.690243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms
00:30:14.003 [2024-12-05 12:33:44.690250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.003 [2024-12-05 12:33:44.722223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.003 [2024-12-05 12:33:44.722262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:30:14.003 [2024-12-05 12:33:44.722274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.925 ms
00:30:14.003 [2024-12-05 12:33:44.722287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.003 [2024-12-05 12:33:44.722369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.003 [2024-12-05 12:33:44.722378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:30:14.003 [2024-12-05 12:33:44.722387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms
00:30:14.003 [2024-12-05 12:33:44.722396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.003 [2024-12-05 12:33:44.769264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.003 [2024-12-05 12:33:44.769303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:30:14.003 [2024-12-05 12:33:44.769316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.814 ms
00:30:14.003 [2024-12-05 12:33:44.769325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.003 [2024-12-05 12:33:44.769368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.003 [2024-12-05 12:33:44.769378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:30:14.003 [2024-12-05 12:33:44.769390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:30:14.003 [2024-12-05 12:33:44.769398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.003 [2024-12-05 12:33:44.769901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.003 [2024-12-05 12:33:44.769927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:30:14.004 [2024-12-05 12:33:44.769938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms
00:30:14.004 [2024-12-05 12:33:44.769946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.004 [2024-12-05 12:33:44.770091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.004 [2024-12-05 12:33:44.770102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:30:14.004 [2024-12-05 12:33:44.770117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms
00:30:14.004 [2024-12-05 12:33:44.770124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.004 [2024-12-05 12:33:44.785105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.004 [2024-12-05 12:33:44.785143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:30:14.004 [2024-12-05 12:33:44.785153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.962 ms
00:30:14.004 [2024-12-05 12:33:44.785161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.004 [2024-12-05 12:33:44.798929] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1
00:30:14.004 [2024-12-05 12:33:44.798965] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:30:14.004 [2024-12-05 12:33:44.798978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.004 [2024-12-05 12:33:44.798987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:30:14.004 [2024-12-05 12:33:44.798996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.721 ms
00:30:14.004 [2024-12-05 12:33:44.799005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.004 [2024-12-05 12:33:44.823857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.004 [2024-12-05 12:33:44.823893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:30:14.004 [2024-12-05 12:33:44.823904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.810 ms
00:30:14.004 [2024-12-05 12:33:44.823912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.004 [2024-12-05 12:33:44.836272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.004 [2024-12-05 12:33:44.836307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:30:14.004 [2024-12-05 12:33:44.836317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.318 ms
00:30:14.004 [2024-12-05 12:33:44.836325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.004 [2024-12-05 12:33:44.848101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.004 [2024-12-05 12:33:44.848137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:30:14.004 [2024-12-05 12:33:44.848147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.741 ms
00:30:14.004 [2024-12-05 12:33:44.848155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.004 [2024-12-05 12:33:44.848820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.004 [2024-12-05 12:33:44.848877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:30:14.004 [2024-12-05 12:33:44.848892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms
00:30:14.004 [2024-12-05 12:33:44.848900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.265 [2024-12-05 12:33:44.910665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.265 [2024-12-05 12:33:44.910719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:30:14.265 [2024-12-05 12:33:44.910740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.744 ms
00:30:14.265 [2024-12-05 12:33:44.910749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.265 [2024-12-05 12:33:44.921594] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:30:14.265 [2024-12-05 12:33:44.924203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.265 [2024-12-05 12:33:44.924233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:30:14.265 [2024-12-05 12:33:44.924246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.403 ms
00:30:14.265 [2024-12-05 12:33:44.924254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.265 [2024-12-05 12:33:44.924329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.265 [2024-12-05 12:33:44.924341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:30:14.265 [2024-12-05 12:33:44.924355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:30:14.265 [2024-12-05 12:33:44.924363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.265 [2024-12-05 12:33:44.925086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.265 [2024-12-05 12:33:44.925118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:30:14.265 [2024-12-05 12:33:44.925130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms
00:30:14.265 [2024-12-05 12:33:44.925137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.265 [2024-12-05 12:33:44.925161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.265 [2024-12-05 12:33:44.925170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:30:14.265 [2024-12-05 12:33:44.925179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:30:14.265 [2024-12-05 12:33:44.925186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.265 [2024-12-05 12:33:44.925227] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:30:14.265 [2024-12-05 12:33:44.925238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.265 [2024-12-05 12:33:44.925247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:30:14.265 [2024-12-05 12:33:44.925255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:30:14.265 [2024-12-05 12:33:44.925263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.265 [2024-12-05 12:33:44.949116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.265 [2024-12-05 12:33:44.949151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:30:14.265 [2024-12-05 12:33:44.949167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.834 ms
00:30:14.265 [2024-12-05 12:33:44.949176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:14.265 [2024-12-05 12:33:44.949251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:14.265 [2024-12-05 12:33:44.949263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:30:14.265 [2024-12-05 12:33:44.949272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:30:14.265 [2024-12-05 12:33:44.949280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
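[Editor's note] The layout numbers in the startup dump above are mutually consistent: the SB metadata dump gives region type:0x2 a size of blk_sz:0x5000 blocks, the layout summary reports 20971520 L2P entries with a 4-byte address size, and the NV cache layout prints "Region l2p ... blocks: 80.00 MiB". Assuming the FTL's 4 KiB block size (the block size itself is not printed in this log), all three describe the same 80 MiB table; a minimal Python cross-check:

    # Cross-check the l2p region size reported three ways in the startup dump.
    FTL_BLOCK_SIZE = 4096                     # assumed 4 KiB FTL block size
    print(0x5000 * FTL_BLOCK_SIZE / 2**20)    # blk_sz:0x5000          -> 80.0 MiB
    print(20971520 * 4 / 2**20)               # entries * address size -> 80.0 MiB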
00:30:14.265 [2024-12-05 12:33:44.950583] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 289.956 ms, result 0
00:30:15.648 [2024-12-05T12:33:47.462Z] Copying: 944/1048576 [kB] (944 kBps)
[2024-12-05T12:33:48.409Z] Copying: 1916/1048576 [kB] (972 kBps)
[2024-12-05T12:33:49.354Z] Copying: 4752/1048576 [kB] (2836 kBps)
[2024-12-05T12:33:50.298Z] Copying: 24/1024 [MB] (19 MBps)
[2024-12-05T12:33:51.242Z] Copying: 47/1024 [MB] (23 MBps)
[2024-12-05T12:33:52.188Z] Copying: 65/1024 [MB] (17 MBps)
[2024-12-05T12:33:53.578Z] Copying: 85/1024 [MB] (20 MBps)
[2024-12-05T12:33:54.152Z] Copying: 107/1024 [MB] (21 MBps)
[2024-12-05T12:33:55.538Z] Copying: 124/1024 [MB] (16 MBps)
[2024-12-05T12:33:56.483Z] Copying: 139/1024 [MB] (15 MBps)
[2024-12-05T12:33:57.452Z] Copying: 160/1024 [MB] (20 MBps)
[2024-12-05T12:33:58.398Z] Copying: 175/1024 [MB] (15 MBps)
[2024-12-05T12:33:59.343Z] Copying: 191/1024 [MB] (16 MBps)
[2024-12-05T12:34:00.280Z] Copying: 207/1024 [MB] (16 MBps)
[2024-12-05T12:34:01.212Z] Copying: 225/1024 [MB] (17 MBps)
[2024-12-05T12:34:02.143Z] Copying: 246/1024 [MB] (21 MBps)
[2024-12-05T12:34:03.527Z] Copying: 267/1024 [MB] (21 MBps)
[2024-12-05T12:34:04.472Z] Copying: 283/1024 [MB] (16 MBps)
[2024-12-05T12:34:05.416Z] Copying: 300/1024 [MB] (16 MBps)
[2024-12-05T12:34:06.360Z] Copying: 325/1024 [MB] (25 MBps)
[2024-12-05T12:34:07.303Z] Copying: 340/1024 [MB] (15 MBps)
[2024-12-05T12:34:08.245Z] Copying: 356/1024 [MB] (15 MBps)
[2024-12-05T12:34:09.188Z] Copying: 371/1024 [MB] (15 MBps)
[2024-12-05T12:34:10.574Z] Copying: 386/1024 [MB] (14 MBps)
[2024-12-05T12:34:11.147Z] Copying: 401/1024 [MB] (14 MBps)
[2024-12-05T12:34:12.531Z] Copying: 416/1024 [MB] (15 MBps)
[2024-12-05T12:34:13.464Z] Copying: 431/1024 [MB] (15 MBps)
[2024-12-05T12:34:14.396Z] Copying: 451/1024 [MB] (19 MBps)
[2024-12-05T12:34:15.335Z] Copying: 474/1024 [MB] (23 MBps)
[2024-12-05T12:34:16.278Z] Copying: 493/1024 [MB] (19 MBps)
[2024-12-05T12:34:17.254Z] Copying: 509/1024 [MB] (15 MBps)
[2024-12-05T12:34:18.199Z] Copying: 524/1024 [MB] (15 MBps)
[2024-12-05T12:34:19.145Z] Copying: 542/1024 [MB] (17 MBps)
[2024-12-05T12:34:20.534Z] Copying: 557/1024 [MB] (15 MBps)
[2024-12-05T12:34:21.481Z] Copying: 574/1024 [MB] (16 MBps)
[2024-12-05T12:34:22.423Z] Copying: 589/1024 [MB] (14 MBps)
[2024-12-05T12:34:23.358Z] Copying: 604/1024 [MB] (15 MBps)
[2024-12-05T12:34:24.288Z] Copying: 621/1024 [MB] (16 MBps)
[2024-12-05T12:34:25.223Z] Copying: 640/1024 [MB] (19 MBps)
[2024-12-05T12:34:26.179Z] Copying: 662/1024 [MB] (21 MBps)
[2024-12-05T12:34:27.554Z] Copying: 684/1024 [MB] (22 MBps)
[2024-12-05T12:34:28.488Z] Copying: 704/1024 [MB] (20 MBps)
[2024-12-05T12:34:29.422Z] Copying: 724/1024 [MB] (19 MBps)
[2024-12-05T12:34:30.450Z] Copying: 751/1024 [MB] (27 MBps)
[2024-12-05T12:34:31.390Z] Copying: 772/1024 [MB] (21 MBps)
[2024-12-05T12:34:32.326Z] Copying: 788/1024 [MB] (15 MBps)
[2024-12-05T12:34:33.261Z] Copying: 814/1024 [MB] (26 MBps)
[2024-12-05T12:34:34.194Z] Copying: 835/1024 [MB] (20 MBps)
[2024-12-05T12:34:35.579Z] Copying: 857/1024 [MB] (21 MBps)
[2024-12-05T12:34:36.150Z] Copying: 875/1024 [MB] (18 MBps)
[2024-12-05T12:34:37.529Z] Copying: 891/1024 [MB] (15 MBps)
[2024-12-05T12:34:38.462Z] Copying: 912/1024 [MB] (21 MBps)
[2024-12-05T12:34:39.397Z] Copying: 936/1024 [MB] (24 MBps)
[2024-12-05T12:34:40.337Z] Copying: 960/1024 [MB] (23 MBps)
[2024-12-05T12:34:41.281Z] Copying: 977/1024 [MB] (17 MBps)
[2024-12-05T12:34:42.225Z] Copying: 992/1024 [MB] (14 MBps)
[2024-12-05T12:34:42.791Z] Copying: 1007/1024 [MB] (14 MBps)
[2024-12-05T12:34:42.792Z] Copying: 1024/1024 [MB] (average 17 MBps)
[2024-12-05 12:34:42.772063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:11.923 [2024-12-05 12:34:42.772126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:31:11.923 [2024-12-05 12:34:42.772142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:31:11.923 [2024-12-05 12:34:42.772152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:11.923 [2024-12-05 12:34:42.772176] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:31:11.923 [2024-12-05 12:34:42.775675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:11.923 [2024-12-05 12:34:42.775778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:31:11.923 [2024-12-05 12:34:42.775838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.482 ms
00:31:11.923 [2024-12-05 12:34:42.775864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:11.923 [2024-12-05 12:34:42.776143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:11.923 [2024-12-05 12:34:42.776177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:31:11.923 [2024-12-05 12:34:42.776199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms
00:31:11.923 [2024-12-05 12:34:42.776252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:12.181 [2024-12-05 12:34:42.795017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:12.181 [2024-12-05 12:34:42.795119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:31:12.181 [2024-12-05 12:34:42.795167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.732 ms
00:31:12.181 [2024-12-05 12:34:42.795185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:12.181 [2024-12-05 12:34:42.800001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:12.181 [2024-12-05 12:34:42.800088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:31:12.181 [2024-12-05 12:34:42.800142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.784 ms
00:31:12.181 [2024-12-05 12:34:42.800161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:12.181 [2024-12-05 12:34:42.819836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:12.181 [2024-12-05 12:34:42.819939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:31:12.181 [2024-12-05 12:34:42.819985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.633 ms
00:31:12.181 [2024-12-05 12:34:42.820003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:12.181 [2024-12-05 12:34:42.831819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:12.181 [2024-12-05 12:34:42.831914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:31:12.181 [2024-12-05 12:34:42.831956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.783 ms
00:31:12.182 [2024-12-05 12:34:42.831973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:12.182 [2024-12-05 12:34:42.836134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:12.182 [2024-12-05 12:34:42.836214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:31:12.182 [2024-12-05 12:34:42.836254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.126 ms
00:31:12.182 [2024-12-05 12:34:42.836276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:12.182 [2024-12-05 12:34:42.855111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:12.182 [2024-12-05 12:34:42.855192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:31:12.182 [2024-12-05 12:34:42.855231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.812 ms
00:31:12.182 [2024-12-05 12:34:42.855248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:12.182 [2024-12-05 12:34:42.873658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:12.182 [2024-12-05 12:34:42.873742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:31:12.182 [2024-12-05 12:34:42.873780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.379 ms
00:31:12.182 [2024-12-05 12:34:42.873797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:12.182 [2024-12-05 12:34:42.891493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:12.182 [2024-12-05 12:34:42.891579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:31:12.182 [2024-12-05 12:34:42.891617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.666 ms
00:31:12.182 [2024-12-05 12:34:42.891633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:12.182 [2024-12-05 12:34:42.909754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:12.182 [2024-12-05 12:34:42.909846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:31:12.182 [2024-12-05 12:34:42.909858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.071 ms
00:31:12.182 [2024-12-05 12:34:42.909864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:12.182 [2024-12-05 12:34:42.909886] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:31:12.182 [2024-12-05 12:34:42.909899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:31:12.182 [2024-12-05 12:34:42.909908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1792 / 261120 wr_cnt: 1 state: open
00:31:12.182 [2024-12-05 12:34:42.909915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.909921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.909927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.909934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.909940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.909946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.909952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.909958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.909964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.909971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.909977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.909983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.909989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.909995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:31:12.182 [2024-12-05 12:34:42.910107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120
wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:12.182 [2024-12-05 12:34:42.910251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910411] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:12.183 [2024-12-05 12:34:42.910538] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:12.183 [2024-12-05 12:34:42.910545] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 213de987-76bc-4387-96cd-ea9d7da49553 00:31:12.183 [2024-12-05 12:34:42.910551] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262912 00:31:12.183 [2024-12-05 12:34:42.910558] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263872 00:31:12.183 [2024-12-05 12:34:42.910566] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 261888 00:31:12.183 [2024-12-05 12:34:42.910573] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:31:12.183 [2024-12-05 12:34:42.910580] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:12.183 [2024-12-05 12:34:42.910592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:12.183 [2024-12-05 12:34:42.910599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:12.183 [2024-12-05 12:34:42.910606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:12.183 [2024-12-05 
12:34:42.910612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:12.183 [2024-12-05 12:34:42.910617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.183 [2024-12-05 12:34:42.910623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:12.183 [2024-12-05 12:34:42.910631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:31:12.183 [2024-12-05 12:34:42.910638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.183 [2024-12-05 12:34:42.921030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.183 [2024-12-05 12:34:42.921054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:12.183 [2024-12-05 12:34:42.921063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.378 ms 00:31:12.183 [2024-12-05 12:34:42.921069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.183 [2024-12-05 12:34:42.921368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.183 [2024-12-05 12:34:42.921376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:12.183 [2024-12-05 12:34:42.921383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:31:12.183 [2024-12-05 12:34:42.921390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.183 [2024-12-05 12:34:42.949143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.183 [2024-12-05 12:34:42.949169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:12.183 [2024-12-05 12:34:42.949177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.183 [2024-12-05 12:34:42.949183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.183 [2024-12-05 12:34:42.949225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.183 [2024-12-05 12:34:42.949232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:12.183 [2024-12-05 12:34:42.949238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.183 [2024-12-05 12:34:42.949245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.183 [2024-12-05 12:34:42.949292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.183 [2024-12-05 12:34:42.949299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:12.183 [2024-12-05 12:34:42.949306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.183 [2024-12-05 12:34:42.949312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.183 [2024-12-05 12:34:42.949324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.183 [2024-12-05 12:34:42.949330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:12.184 [2024-12-05 12:34:42.949338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.184 [2024-12-05 12:34:42.949343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.184 [2024-12-05 12:34:43.012544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.184 [2024-12-05 12:34:43.012585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:12.184 [2024-12-05 12:34:43.012595] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.184 [2024-12-05 12:34:43.012601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.442 [2024-12-05 12:34:43.064751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.442 [2024-12-05 12:34:43.064786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:12.442 [2024-12-05 12:34:43.064796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.442 [2024-12-05 12:34:43.064802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.442 [2024-12-05 12:34:43.064871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.442 [2024-12-05 12:34:43.064884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:12.442 [2024-12-05 12:34:43.064890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.442 [2024-12-05 12:34:43.064897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.442 [2024-12-05 12:34:43.064927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.442 [2024-12-05 12:34:43.064934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:12.442 [2024-12-05 12:34:43.064940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.442 [2024-12-05 12:34:43.064947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.442 [2024-12-05 12:34:43.065023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.442 [2024-12-05 12:34:43.065032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:12.442 [2024-12-05 12:34:43.065040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.442 [2024-12-05 12:34:43.065047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.442 [2024-12-05 12:34:43.065071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.442 [2024-12-05 12:34:43.065078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:12.442 [2024-12-05 12:34:43.065085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.442 [2024-12-05 12:34:43.065091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.442 [2024-12-05 12:34:43.065126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.442 [2024-12-05 12:34:43.065134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:12.442 [2024-12-05 12:34:43.065143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.442 [2024-12-05 12:34:43.065149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.442 [2024-12-05 12:34:43.065189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.442 [2024-12-05 12:34:43.065197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:12.442 [2024-12-05 12:34:43.065203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.442 [2024-12-05 12:34:43.065210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.442 [2024-12-05 12:34:43.065317] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 293.238 ms, result 0 00:31:13.029 00:31:13.029 00:31:13.029 
12:34:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:14.938 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:14.938 12:34:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:14.938 [2024-12-05 12:34:45.382094] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:31:14.938 [2024-12-05 12:34:45.382202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83308 ] 00:31:14.938 [2024-12-05 12:34:45.535162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.938 [2024-12-05 12:34:45.628986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.196 [2024-12-05 12:34:45.866136] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:15.196 [2024-12-05 12:34:45.866196] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:15.196 [2024-12-05 12:34:46.021447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.196 [2024-12-05 12:34:46.021497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:15.196 [2024-12-05 12:34:46.021509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:15.196 [2024-12-05 12:34:46.021516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.196 [2024-12-05 12:34:46.021555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.196 [2024-12-05 12:34:46.021565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:15.196 [2024-12-05 12:34:46.021572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:31:15.196 [2024-12-05 12:34:46.021578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.196 [2024-12-05 12:34:46.021592] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:15.196 [2024-12-05 12:34:46.022157] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:15.196 [2024-12-05 12:34:46.022175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.196 [2024-12-05 12:34:46.022182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:15.196 [2024-12-05 12:34:46.022190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:31:15.196 [2024-12-05 12:34:46.022197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.196 [2024-12-05 12:34:46.023477] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:15.196 [2024-12-05 12:34:46.034440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.196 [2024-12-05 12:34:46.034482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:15.196 [2024-12-05 12:34:46.034492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.980 ms 00:31:15.196 [2024-12-05 12:34:46.034498] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:31:15.197 [2024-12-05 12:34:46.034547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.197 [2024-12-05 12:34:46.034555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:15.197 [2024-12-05 12:34:46.034562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:15.197 [2024-12-05 12:34:46.034568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.197 [2024-12-05 12:34:46.040994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.197 [2024-12-05 12:34:46.041019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:15.197 [2024-12-05 12:34:46.041027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.383 ms 00:31:15.197 [2024-12-05 12:34:46.041037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.197 [2024-12-05 12:34:46.041098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.197 [2024-12-05 12:34:46.041105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:15.197 [2024-12-05 12:34:46.041112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:31:15.197 [2024-12-05 12:34:46.041118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.197 [2024-12-05 12:34:46.041158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.197 [2024-12-05 12:34:46.041166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:15.197 [2024-12-05 12:34:46.041173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:15.197 [2024-12-05 12:34:46.041179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.197 [2024-12-05 12:34:46.041199] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:15.197 [2024-12-05 12:34:46.044119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.197 [2024-12-05 12:34:46.044142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:15.197 [2024-12-05 12:34:46.044153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.925 ms 00:31:15.197 [2024-12-05 12:34:46.044159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.197 [2024-12-05 12:34:46.044185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.197 [2024-12-05 12:34:46.044192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:15.197 [2024-12-05 12:34:46.044198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:15.197 [2024-12-05 12:34:46.044205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.197 [2024-12-05 12:34:46.044220] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:15.197 [2024-12-05 12:34:46.044236] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:15.197 [2024-12-05 12:34:46.044267] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:15.197 [2024-12-05 12:34:46.044283] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:15.197 [2024-12-05 12:34:46.044368] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:15.197 [2024-12-05 12:34:46.044377] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:15.197 [2024-12-05 12:34:46.044386] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:15.197 [2024-12-05 12:34:46.044394] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:15.197 [2024-12-05 12:34:46.044402] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:15.197 [2024-12-05 12:34:46.044409] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:15.197 [2024-12-05 12:34:46.044416] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:15.197 [2024-12-05 12:34:46.044424] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:15.197 [2024-12-05 12:34:46.044430] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:15.197 [2024-12-05 12:34:46.044436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.197 [2024-12-05 12:34:46.044442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:15.197 [2024-12-05 12:34:46.044448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:31:15.197 [2024-12-05 12:34:46.044454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.197 [2024-12-05 12:34:46.044528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.197 [2024-12-05 12:34:46.044536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:15.197 [2024-12-05 12:34:46.044542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:15.197 [2024-12-05 12:34:46.044548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.197 [2024-12-05 12:34:46.044629] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:15.197 [2024-12-05 12:34:46.044642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:15.197 [2024-12-05 12:34:46.044650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:15.197 [2024-12-05 12:34:46.044657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.197 [2024-12-05 12:34:46.044663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:15.197 [2024-12-05 12:34:46.044668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:15.197 [2024-12-05 12:34:46.044674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:15.197 [2024-12-05 12:34:46.044679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:15.197 [2024-12-05 12:34:46.044685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:15.197 [2024-12-05 12:34:46.044699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:15.197 [2024-12-05 12:34:46.044706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:15.197 [2024-12-05 12:34:46.044711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:15.197 [2024-12-05 12:34:46.044716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:15.197 [2024-12-05 
12:34:46.044726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:15.197 [2024-12-05 12:34:46.044732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:15.197 [2024-12-05 12:34:46.044737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.197 [2024-12-05 12:34:46.044743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:15.197 [2024-12-05 12:34:46.044748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:15.197 [2024-12-05 12:34:46.044754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.197 [2024-12-05 12:34:46.044760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:15.197 [2024-12-05 12:34:46.044766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:15.197 [2024-12-05 12:34:46.044771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.197 [2024-12-05 12:34:46.044777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:15.197 [2024-12-05 12:34:46.044782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:15.197 [2024-12-05 12:34:46.044787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.197 [2024-12-05 12:34:46.044792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:15.197 [2024-12-05 12:34:46.044797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:15.197 [2024-12-05 12:34:46.044802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.197 [2024-12-05 12:34:46.044808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:15.197 [2024-12-05 12:34:46.044813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:15.197 [2024-12-05 12:34:46.044818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.197 [2024-12-05 12:34:46.044824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:15.197 [2024-12-05 12:34:46.044829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:15.197 [2024-12-05 12:34:46.044834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:15.197 [2024-12-05 12:34:46.044840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:15.197 [2024-12-05 12:34:46.044845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:15.197 [2024-12-05 12:34:46.044850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:15.197 [2024-12-05 12:34:46.044855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:15.197 [2024-12-05 12:34:46.044860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:15.197 [2024-12-05 12:34:46.044865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.197 [2024-12-05 12:34:46.044870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:15.197 [2024-12-05 12:34:46.044877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:15.197 [2024-12-05 12:34:46.044883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.197 [2024-12-05 12:34:46.044888] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:15.197 [2024-12-05 12:34:46.044895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:31:15.197 [2024-12-05 12:34:46.044901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:15.197 [2024-12-05 12:34:46.044907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.197 [2024-12-05 12:34:46.044913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:15.197 [2024-12-05 12:34:46.044918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:15.197 [2024-12-05 12:34:46.044924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:15.197 [2024-12-05 12:34:46.044929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:15.197 [2024-12-05 12:34:46.044934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:15.197 [2024-12-05 12:34:46.044939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:15.197 [2024-12-05 12:34:46.044945] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:15.197 [2024-12-05 12:34:46.044953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:15.197 [2024-12-05 12:34:46.044964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:15.198 [2024-12-05 12:34:46.044970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:15.198 [2024-12-05 12:34:46.044976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:15.198 [2024-12-05 12:34:46.044981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:15.198 [2024-12-05 12:34:46.044986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:15.198 [2024-12-05 12:34:46.044991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:15.198 [2024-12-05 12:34:46.044996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:15.198 [2024-12-05 12:34:46.045002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:15.198 [2024-12-05 12:34:46.045007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:15.198 [2024-12-05 12:34:46.045013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:15.198 [2024-12-05 12:34:46.045018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:15.198 [2024-12-05 12:34:46.045023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:15.198 [2024-12-05 12:34:46.045029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:15.198 [2024-12-05 12:34:46.045034] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:15.198 [2024-12-05 12:34:46.045039] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:15.198 [2024-12-05 12:34:46.045046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:15.198 [2024-12-05 12:34:46.045053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:15.198 [2024-12-05 12:34:46.045059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:15.198 [2024-12-05 12:34:46.045065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:15.198 [2024-12-05 12:34:46.045071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:15.198 [2024-12-05 12:34:46.045077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.198 [2024-12-05 12:34:46.045083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:15.198 [2024-12-05 12:34:46.045089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:31:15.198 [2024-12-05 12:34:46.045094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.456 [2024-12-05 12:34:46.069754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.456 [2024-12-05 12:34:46.069783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:15.456 [2024-12-05 12:34:46.069792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.616 ms 00:31:15.456 [2024-12-05 12:34:46.069802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.456 [2024-12-05 12:34:46.069864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.456 [2024-12-05 12:34:46.069871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:15.456 [2024-12-05 12:34:46.069877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:31:15.456 [2024-12-05 12:34:46.069883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.456 [2024-12-05 12:34:46.108615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.456 [2024-12-05 12:34:46.108648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:15.456 [2024-12-05 12:34:46.108658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.690 ms 00:31:15.456 [2024-12-05 12:34:46.108665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.456 [2024-12-05 12:34:46.108714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.456 [2024-12-05 12:34:46.108723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:15.456 [2024-12-05 12:34:46.108734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:15.456 [2024-12-05 12:34:46.108740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.456 [2024-12-05 12:34:46.109150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.456 [2024-12-05 
12:34:46.109172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:15.456 [2024-12-05 12:34:46.109180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:31:15.456 [2024-12-05 12:34:46.109186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.456 [2024-12-05 12:34:46.109303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.456 [2024-12-05 12:34:46.109311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:15.456 [2024-12-05 12:34:46.109319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:31:15.456 [2024-12-05 12:34:46.109336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.456 [2024-12-05 12:34:46.121408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.456 [2024-12-05 12:34:46.121435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:15.456 [2024-12-05 12:34:46.121445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.055 ms 00:31:15.456 [2024-12-05 12:34:46.121451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.456 [2024-12-05 12:34:46.132321] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:15.456 [2024-12-05 12:34:46.132360] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:15.456 [2024-12-05 12:34:46.132370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.456 [2024-12-05 12:34:46.132377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:15.456 [2024-12-05 12:34:46.132384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.817 ms 00:31:15.456 [2024-12-05 12:34:46.132390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.456 [2024-12-05 12:34:46.151666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.456 [2024-12-05 12:34:46.151693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:15.456 [2024-12-05 12:34:46.151703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.242 ms 00:31:15.456 [2024-12-05 12:34:46.151710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.456 [2024-12-05 12:34:46.161521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.456 [2024-12-05 12:34:46.161547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:15.457 [2024-12-05 12:34:46.161555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.771 ms 00:31:15.457 [2024-12-05 12:34:46.161561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.457 [2024-12-05 12:34:46.170958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.457 [2024-12-05 12:34:46.170984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:15.457 [2024-12-05 12:34:46.170993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.371 ms 00:31:15.457 [2024-12-05 12:34:46.170999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.457 [2024-12-05 12:34:46.171487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.457 [2024-12-05 12:34:46.171504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:31:15.457 [2024-12-05 12:34:46.171515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:31:15.457 [2024-12-05 12:34:46.171522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.457 [2024-12-05 12:34:46.221410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.457 [2024-12-05 12:34:46.221447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:15.457 [2024-12-05 12:34:46.221470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.874 ms 00:31:15.457 [2024-12-05 12:34:46.221477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.457 [2024-12-05 12:34:46.229953] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:15.457 [2024-12-05 12:34:46.232240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.457 [2024-12-05 12:34:46.232265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:15.457 [2024-12-05 12:34:46.232275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.726 ms 00:31:15.457 [2024-12-05 12:34:46.232283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.457 [2024-12-05 12:34:46.232363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.457 [2024-12-05 12:34:46.232371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:15.457 [2024-12-05 12:34:46.232381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:15.457 [2024-12-05 12:34:46.232388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.457 [2024-12-05 12:34:46.233056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.457 [2024-12-05 12:34:46.233083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:15.457 [2024-12-05 12:34:46.233091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.637 ms 00:31:15.457 [2024-12-05 12:34:46.233104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.457 [2024-12-05 12:34:46.233124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.457 [2024-12-05 12:34:46.233132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:15.457 [2024-12-05 12:34:46.233138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:15.457 [2024-12-05 12:34:46.233144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.457 [2024-12-05 12:34:46.233177] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:15.457 [2024-12-05 12:34:46.233186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.457 [2024-12-05 12:34:46.233193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:15.457 [2024-12-05 12:34:46.233200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:15.457 [2024-12-05 12:34:46.233206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.457 [2024-12-05 12:34:46.252562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.457 [2024-12-05 12:34:46.252591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:15.457 [2024-12-05 12:34:46.252603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 19.341 ms 00:31:15.457 [2024-12-05 12:34:46.252611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.457 [2024-12-05 12:34:46.252670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.457 [2024-12-05 12:34:46.252678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:15.457 [2024-12-05 12:34:46.252685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:15.457 [2024-12-05 12:34:46.252707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.457 [2024-12-05 12:34:46.253603] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 231.761 ms, result 0 00:31:16.831  [2024-12-05T12:34:48.638Z] Copying: 15/1024 [MB] (15 MBps) [... intermediate progress updates elided ...]
[2024-12-05T12:36:08.773Z] Copying: 1021/1024 [MB] (14 MBps) [2024-12-05T12:36:09.034Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-05 12:36:08.830050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.165 [2024-12-05 12:36:08.830166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:38.165 [2024-12-05 12:36:08.830191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:38.165 [2024-12-05 12:36:08.830205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.165 [2024-12-05 12:36:08.830241] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:38.165 [2024-12-05 12:36:08.834704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.165 [2024-12-05 12:36:08.834761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:38.165 [2024-12-05 12:36:08.834777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.439 ms 00:32:38.165 [2024-12-05 12:36:08.834789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.165 [2024-12-05 12:36:08.835138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.165 [2024-12-05 12:36:08.835154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:38.165 [2024-12-05 12:36:08.835168] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:32:38.165 [2024-12-05 12:36:08.835180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.165 [2024-12-05 12:36:08.841407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.165 [2024-12-05 12:36:08.841451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:38.165 [2024-12-05 12:36:08.841476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.204 ms 00:32:38.165 [2024-12-05 12:36:08.841494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.165 [2024-12-05 12:36:08.847733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.165 [2024-12-05 12:36:08.847771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:38.165 [2024-12-05 12:36:08.847783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.209 ms 00:32:38.165 [2024-12-05 12:36:08.847793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.165 [2024-12-05 12:36:08.875755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.165 [2024-12-05 12:36:08.875799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:38.165 [2024-12-05 12:36:08.875813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.876 ms 00:32:38.165 [2024-12-05 12:36:08.875825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.165 [2024-12-05 12:36:08.896894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.165 [2024-12-05 12:36:08.896934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:38.165 [2024-12-05 12:36:08.896948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.018 ms 00:32:38.165 [2024-12-05 12:36:08.896958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.165 [2024-12-05 12:36:08.903242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.165 [2024-12-05 12:36:08.903282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:38.165 [2024-12-05 12:36:08.903293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.224 ms 00:32:38.165 [2024-12-05 12:36:08.903303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.165 [2024-12-05 12:36:08.929483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.165 [2024-12-05 12:36:08.929524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:38.165 [2024-12-05 12:36:08.929536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.162 ms 00:32:38.165 [2024-12-05 12:36:08.929545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.165 [2024-12-05 12:36:08.955097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.165 [2024-12-05 12:36:08.955134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:38.165 [2024-12-05 12:36:08.955146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.506 ms 00:32:38.165 [2024-12-05 12:36:08.955154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.165 [2024-12-05 12:36:08.979890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.165 [2024-12-05 12:36:08.979927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist superblock 00:32:38.165 [2024-12-05 12:36:08.979939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.690 ms 00:32:38.165 [2024-12-05 12:36:08.979947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.165 [2024-12-05 12:36:09.004773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.165 [2024-12-05 12:36:09.004811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:38.165 [2024-12-05 12:36:09.004823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.749 ms 00:32:38.165 [2024-12-05 12:36:09.004832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.165 [2024-12-05 12:36:09.004876] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:38.165 [2024-12-05 12:36:09.004904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:38.165 [2024-12-05 12:36:09.004920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1792 / 261120 wr_cnt: 1 state: open 00:32:38.165 [2024-12-05 12:36:09.004931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:38.165 [2024-12-05 12:36:09.004940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:38.165 [2024-12-05 12:36:09.004949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:38.165 [2024-12-05 12:36:09.004959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.004970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.004979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.004990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.004998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 
12:36:09.005083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 
00:32:38.166 [2024-12-05 12:36:09.005282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 
wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:38.166 [2024-12-05 12:36:09.005753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:38.167 [2024-12-05 12:36:09.005761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:38.167 [2024-12-05 12:36:09.005769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:38.167 [2024-12-05 12:36:09.005777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:38.167 [2024-12-05 12:36:09.005785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:38.167 [2024-12-05 12:36:09.005793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:38.167 [2024-12-05 12:36:09.005809] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:38.167 [2024-12-05 12:36:09.005818] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 213de987-76bc-4387-96cd-ea9d7da49553 00:32:38.167 [2024-12-05 12:36:09.005826] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262912 00:32:38.167 [2024-12-05 12:36:09.005834] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:38.167 [2024-12-05 12:36:09.005841] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:38.167 [2024-12-05 12:36:09.005850] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:38.167 [2024-12-05 12:36:09.005866] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:38.167 [2024-12-05 12:36:09.005874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:38.167 [2024-12-05 12:36:09.005882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:38.167 [2024-12-05 12:36:09.005891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:38.167 [2024-12-05 12:36:09.005898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:38.167 [2024-12-05 12:36:09.005906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.167 [2024-12-05 12:36:09.005915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:38.167 [2024-12-05 12:36:09.005925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:32:38.167 [2024-12-05 12:36:09.005936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.167 [2024-12-05 12:36:09.020570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.167 [2024-12-05 12:36:09.020606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:38.167 [2024-12-05 12:36:09.020619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.614 ms 00:32:38.167 [2024-12-05 12:36:09.020629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.167 [2024-12-05 12:36:09.021084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.167 [2024-12-05 12:36:09.021113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:38.167 [2024-12-05 12:36:09.021124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:32:38.167 [2024-12-05 12:36:09.021145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.428 [2024-12-05 
12:36:09.061325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.428 [2024-12-05 12:36:09.061372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:38.428 [2024-12-05 12:36:09.061386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.428 [2024-12-05 12:36:09.061397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.428 [2024-12-05 12:36:09.061490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.428 [2024-12-05 12:36:09.061510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:38.428 [2024-12-05 12:36:09.061521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.428 [2024-12-05 12:36:09.061531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.428 [2024-12-05 12:36:09.061627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.428 [2024-12-05 12:36:09.061641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:38.428 [2024-12-05 12:36:09.061651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.428 [2024-12-05 12:36:09.061661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.428 [2024-12-05 12:36:09.061681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.428 [2024-12-05 12:36:09.061692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:38.428 [2024-12-05 12:36:09.061707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.428 [2024-12-05 12:36:09.061715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.428 [2024-12-05 12:36:09.152523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.428 [2024-12-05 12:36:09.152587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:38.428 [2024-12-05 12:36:09.152602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.428 [2024-12-05 12:36:09.152612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.428 [2024-12-05 12:36:09.226842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.428 [2024-12-05 12:36:09.226912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:38.428 [2024-12-05 12:36:09.226927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.428 [2024-12-05 12:36:09.226937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.428 [2024-12-05 12:36:09.227015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.428 [2024-12-05 12:36:09.227026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:38.428 [2024-12-05 12:36:09.227037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.428 [2024-12-05 12:36:09.227046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.428 [2024-12-05 12:36:09.227122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.428 [2024-12-05 12:36:09.227134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:38.428 [2024-12-05 12:36:09.227144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.428 [2024-12-05 12:36:09.227156] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.428 [2024-12-05 12:36:09.227275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.428 [2024-12-05 12:36:09.227288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:38.428 [2024-12-05 12:36:09.227298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.428 [2024-12-05 12:36:09.227307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.428 [2024-12-05 12:36:09.227345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.428 [2024-12-05 12:36:09.227357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:38.428 [2024-12-05 12:36:09.227367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.428 [2024-12-05 12:36:09.227376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.428 [2024-12-05 12:36:09.227432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.428 [2024-12-05 12:36:09.227443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:38.428 [2024-12-05 12:36:09.227452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.428 [2024-12-05 12:36:09.227483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.428 [2024-12-05 12:36:09.227547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:38.428 [2024-12-05 12:36:09.227560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:38.428 [2024-12-05 12:36:09.227570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:38.428 [2024-12-05 12:36:09.227581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.428 [2024-12-05 12:36:09.227746] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 397.660 ms, result 0 00:32:39.369 00:32:39.369 00:32:39.369 12:36:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:41.916 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80939 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80939 ']' 00:32:41.916 Process with pid 80939 is not found 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80939 00:32:41.916 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 
958: kill: (80939) - No such process 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80939 is not found' 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:32:41.916 Remove shared memory files 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:41.916 00:32:41.916 real 5m13.258s 00:32:41.916 user 5m42.024s 00:32:41.916 sys 0m28.967s 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:41.916 ************************************ 00:32:41.916 END TEST ftl_dirty_shutdown 00:32:41.916 ************************************ 00:32:41.916 12:36:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:42.178 12:36:12 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:42.178 12:36:12 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:42.178 12:36:12 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:42.178 12:36:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:42.178 ************************************ 00:32:42.178 START TEST ftl_upgrade_shutdown 00:32:42.178 ************************************ 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:42.178 * Looking for test storage... 
00:32:42.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:32:42.178 12:36:12 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:32:42.178 12:36:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:42.178 12:36:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:32:42.178 12:36:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:32:42.178 12:36:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:42.178 12:36:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:42.178 12:36:13 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:32:42.178 12:36:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:42.178 12:36:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:42.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.178 --rc genhtml_branch_coverage=1 00:32:42.178 --rc genhtml_function_coverage=1 00:32:42.178 --rc genhtml_legend=1 00:32:42.178 --rc geninfo_all_blocks=1 00:32:42.178 --rc geninfo_unexecuted_blocks=1 00:32:42.178 00:32:42.178 ' 00:32:42.178 12:36:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:42.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.178 --rc genhtml_branch_coverage=1 00:32:42.178 --rc genhtml_function_coverage=1 00:32:42.178 --rc genhtml_legend=1 00:32:42.178 --rc geninfo_all_blocks=1 00:32:42.178 --rc geninfo_unexecuted_blocks=1 00:32:42.178 00:32:42.178 ' 00:32:42.178 12:36:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:42.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.178 --rc genhtml_branch_coverage=1 00:32:42.178 --rc genhtml_function_coverage=1 00:32:42.178 --rc genhtml_legend=1 00:32:42.178 --rc geninfo_all_blocks=1 00:32:42.178 --rc geninfo_unexecuted_blocks=1 00:32:42.178 00:32:42.178 ' 00:32:42.178 12:36:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:42.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:42.178 --rc genhtml_branch_coverage=1 00:32:42.178 --rc genhtml_function_coverage=1 00:32:42.178 --rc genhtml_legend=1 00:32:42.178 --rc geninfo_all_blocks=1 00:32:42.178 --rc geninfo_unexecuted_blocks=1 00:32:42.178 00:32:42.178 ' 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- 
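The xtrace above walks the version check that gates the lcov options: scripts/common.sh splits both version strings on '.' and '-' (IFS=.-, read -ra) and compares them field by field, so `lt 1.15 2` returns 0 and the branch/function coverage flags get exported. A minimal standalone sketch of that comparison, assuming plain numeric fields (a paraphrase of the idea, not the actual scripts/common.sh source):

    #!/usr/bin/env bash
    # Field-wise "less than" in the spirit of scripts/common.sh: split both
    # versions on '.' and '-', compare numerically, missing fields count as 0.
    version_lt() {
        local IFS=.-
        local -a v1=($1) v2=($2)
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # matches the trace: lt 1.15 2 -> true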
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:32:42.179 12:36:13 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84266 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84266 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84266 ']' 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:42.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:42.179 12:36:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:42.441 [2024-12-05 12:36:13.120447] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
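At this point the harness has launched the target (spdk_tgt '--cpumask=[0]', pid 84266) and blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. The launch-and-poll pattern looks roughly like the sketch below; the binary and socket paths are the ones in the log, but the polling loop is illustrative rather than common.sh's actual waitforlisten (which also checks that the pid stays alive and caps the retries):

    # Start the target pinned to core 0 and wait for its RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
    spdk_tgt_pid=$!
    # Poll until the target services RPCs (illustrative loop).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done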
00:32:42.441 [2024-12-05 12:36:13.120611] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84266 ] 00:32:42.441 [2024-12-05 12:36:13.288376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.703 [2024-12-05 12:36:13.441756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:32:43.653 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:32:43.916 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:32:43.916 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:32:43.916 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:32:43.916 12:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:32:43.916 12:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:43.916 12:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:43.916 12:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:32:43.916 12:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:32:44.178 12:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:44.178 { 00:32:44.178 "name": "basen1", 00:32:44.178 "aliases": [ 00:32:44.178 "b4c545a6-7e57-486a-981b-254683522add" 00:32:44.178 ], 00:32:44.178 "product_name": "NVMe disk", 00:32:44.178 "block_size": 4096, 00:32:44.178 "num_blocks": 1310720, 00:32:44.178 "uuid": "b4c545a6-7e57-486a-981b-254683522add", 00:32:44.178 "numa_id": -1, 00:32:44.178 "assigned_rate_limits": { 00:32:44.178 "rw_ios_per_sec": 0, 00:32:44.178 "rw_mbytes_per_sec": 0, 00:32:44.178 "r_mbytes_per_sec": 0, 00:32:44.178 "w_mbytes_per_sec": 0 00:32:44.178 }, 00:32:44.178 "claimed": true, 00:32:44.178 "claim_type": "read_many_write_one", 00:32:44.178 "zoned": false, 00:32:44.178 "supported_io_types": { 00:32:44.178 "read": true, 00:32:44.178 "write": true, 00:32:44.178 "unmap": true, 00:32:44.178 "flush": true, 00:32:44.178 "reset": true, 00:32:44.178 "nvme_admin": true, 00:32:44.178 "nvme_io": true, 00:32:44.178 "nvme_io_md": false, 00:32:44.178 "write_zeroes": true, 00:32:44.178 "zcopy": false, 00:32:44.178 "get_zone_info": false, 00:32:44.178 "zone_management": false, 00:32:44.178 "zone_append": false, 00:32:44.179 "compare": true, 00:32:44.179 "compare_and_write": false, 00:32:44.179 "abort": true, 00:32:44.179 "seek_hole": false, 00:32:44.179 "seek_data": false, 00:32:44.179 "copy": true, 00:32:44.179 "nvme_iov_md": false 00:32:44.179 }, 00:32:44.179 "driver_specific": { 00:32:44.179 "nvme": [ 00:32:44.179 { 00:32:44.179 "pci_address": "0000:00:11.0", 00:32:44.179 "trid": { 00:32:44.179 "trtype": "PCIe", 00:32:44.179 "traddr": "0000:00:11.0" 00:32:44.179 }, 00:32:44.179 "ctrlr_data": { 00:32:44.179 "cntlid": 0, 00:32:44.179 "vendor_id": "0x1b36", 00:32:44.179 "model_number": "QEMU NVMe Ctrl", 00:32:44.179 "serial_number": "12341", 00:32:44.179 "firmware_revision": "8.0.0", 00:32:44.179 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:44.179 "oacs": { 00:32:44.179 "security": 0, 00:32:44.179 "format": 1, 00:32:44.179 "firmware": 0, 00:32:44.179 "ns_manage": 1 00:32:44.179 }, 00:32:44.179 "multi_ctrlr": false, 00:32:44.179 "ana_reporting": false 00:32:44.179 }, 00:32:44.179 "vs": { 00:32:44.179 "nvme_version": "1.4" 00:32:44.179 }, 00:32:44.179 "ns_data": { 00:32:44.179 "id": 1, 00:32:44.179 "can_share": false 00:32:44.179 } 00:32:44.179 } 00:32:44.179 ], 00:32:44.179 "mp_policy": "active_passive" 00:32:44.179 } 00:32:44.179 } 00:32:44.179 ]' 00:32:44.179 12:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:44.179 12:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:44.179 12:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:44.179 12:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:44.179 12:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:44.179 12:36:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:32:44.179 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:32:44.179 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:32:44.179 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:32:44.179 12:36:14 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:44.179 12:36:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:44.440 12:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=0c132bd4-defb-4f31-a8ff-77f24ab4abc6 00:32:44.440 12:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:32:44.440 12:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c132bd4-defb-4f31-a8ff-77f24ab4abc6 00:32:44.701 12:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:32:44.701 12:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=7733dfa2-0af4-45dd-b3e0-adf154e8cea2 00:32:44.701 12:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 7733dfa2-0af4-45dd-b3e0-adf154e8cea2 00:32:44.962 12:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=bf7ba184-4647-429d-9bf8-7154ec6c3ed1 00:32:44.962 12:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z bf7ba184-4647-429d-9bf8-7154ec6c3ed1 ]] 00:32:44.962 12:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 bf7ba184-4647-429d-9bf8-7154ec6c3ed1 5120 00:32:44.962 12:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:32:44.962 12:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:44.962 12:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=bf7ba184-4647-429d-9bf8-7154ec6c3ed1 00:32:44.962 12:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:32:44.962 12:36:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size bf7ba184-4647-429d-9bf8-7154ec6c3ed1 00:32:44.962 12:36:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=bf7ba184-4647-429d-9bf8-7154ec6c3ed1 00:32:44.962 12:36:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:44.962 12:36:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:44.962 12:36:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:44.962 12:36:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bf7ba184-4647-429d-9bf8-7154ec6c3ed1 00:32:45.222 12:36:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:45.222 { 00:32:45.222 "name": "bf7ba184-4647-429d-9bf8-7154ec6c3ed1", 00:32:45.222 "aliases": [ 00:32:45.222 "lvs/basen1p0" 00:32:45.222 ], 00:32:45.222 "product_name": "Logical Volume", 00:32:45.222 "block_size": 4096, 00:32:45.222 "num_blocks": 5242880, 00:32:45.222 "uuid": "bf7ba184-4647-429d-9bf8-7154ec6c3ed1", 00:32:45.222 "assigned_rate_limits": { 00:32:45.222 "rw_ios_per_sec": 0, 00:32:45.222 "rw_mbytes_per_sec": 0, 00:32:45.222 "r_mbytes_per_sec": 0, 00:32:45.222 "w_mbytes_per_sec": 0 00:32:45.222 }, 00:32:45.222 "claimed": false, 00:32:45.222 "zoned": false, 00:32:45.222 "supported_io_types": { 00:32:45.222 "read": true, 00:32:45.222 "write": true, 00:32:45.222 "unmap": true, 00:32:45.222 "flush": false, 00:32:45.222 "reset": true, 00:32:45.222 "nvme_admin": false, 00:32:45.222 "nvme_io": false, 00:32:45.222 "nvme_io_md": false, 00:32:45.222 "write_zeroes": 
true, 00:32:45.222 "zcopy": false, 00:32:45.222 "get_zone_info": false, 00:32:45.222 "zone_management": false, 00:32:45.222 "zone_append": false, 00:32:45.222 "compare": false, 00:32:45.222 "compare_and_write": false, 00:32:45.222 "abort": false, 00:32:45.222 "seek_hole": true, 00:32:45.222 "seek_data": true, 00:32:45.222 "copy": false, 00:32:45.222 "nvme_iov_md": false 00:32:45.222 }, 00:32:45.222 "driver_specific": { 00:32:45.222 "lvol": { 00:32:45.222 "lvol_store_uuid": "7733dfa2-0af4-45dd-b3e0-adf154e8cea2", 00:32:45.222 "base_bdev": "basen1", 00:32:45.222 "thin_provision": true, 00:32:45.222 "num_allocated_clusters": 0, 00:32:45.222 "snapshot": false, 00:32:45.222 "clone": false, 00:32:45.222 "esnap_clone": false 00:32:45.222 } 00:32:45.222 } 00:32:45.222 } 00:32:45.222 ]' 00:32:45.222 12:36:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:45.222 12:36:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:45.222 12:36:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:45.222 12:36:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:32:45.222 12:36:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:32:45.222 12:36:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:32:45.222 12:36:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:32:45.222 12:36:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:32:45.222 12:36:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:32:45.482 12:36:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:32:45.482 12:36:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:32:45.482 12:36:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:32:45.743 12:36:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:32:45.743 12:36:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:32:45.743 12:36:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d bf7ba184-4647-429d-9bf8-7154ec6c3ed1 -c cachen1p0 --l2p_dram_limit 2 00:32:46.004 [2024-12-05 12:36:16.734713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.004 [2024-12-05 12:36:16.734952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:46.004 [2024-12-05 12:36:16.734987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:46.004 [2024-12-05 12:36:16.734997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.004 [2024-12-05 12:36:16.735092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.004 [2024-12-05 12:36:16.735104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:46.004 [2024-12-05 12:36:16.735117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:32:46.004 [2024-12-05 12:36:16.735126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.004 [2024-12-05 12:36:16.735152] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:46.004 [2024-12-05 
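Both bdev size checks above follow the same arithmetic: get_bdev_size pulls block_size and num_blocks out of the bdev_get_bdevs JSON with jq and reports block_size * num_blocks / 1024^2 MiB, so basen1 comes out as 4096 * 1310720 / 1048576 = 5120 MiB and the thin-provisioned lvol bf7ba184-4647-429d-9bf8-7154ec6c3ed1 as 4096 * 5242880 / 1048576 = 20480 MiB. A condensed sketch of that helper, folding the same rpc.py and jq calls traced above into one function:

    # Condensed form of the traced get_bdev_size: size in MiB from
    # `rpc.py bdev_get_bdevs` JSON, i.e. block_size * num_blocks / 1024^2.
    get_bdev_size_mb() {
        local bdev=$1 info bs nb
        info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev") || return 1
        bs=$(jq -r '.[] | .block_size' <<< "$info")
        nb=$(jq -r '.[] | .num_blocks' <<< "$info")
        echo $(( bs * nb / 1024 / 1024 ))
    }
    get_bdev_size_mb basen1   # -> 5120, matching base_size in the trace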
12:36:16.735964] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:46.004 [2024-12-05 12:36:16.735997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.004 [2024-12-05 12:36:16.736007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:46.004 [2024-12-05 12:36:16.736020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.847 ms 00:32:46.004 [2024-12-05 12:36:16.736029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.004 [2024-12-05 12:36:16.736074] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID c41d8d62-bbc6-4b39-9d35-fd5f270876d3 00:32:46.004 [2024-12-05 12:36:16.738508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.004 [2024-12-05 12:36:16.738562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:32:46.004 [2024-12-05 12:36:16.738576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:32:46.005 [2024-12-05 12:36:16.738588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.005 [2024-12-05 12:36:16.751482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.005 [2024-12-05 12:36:16.751538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:46.005 [2024-12-05 12:36:16.751551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.785 ms 00:32:46.005 [2024-12-05 12:36:16.751563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.005 [2024-12-05 12:36:16.751618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.005 [2024-12-05 12:36:16.751630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:46.005 [2024-12-05 12:36:16.751639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:32:46.005 [2024-12-05 12:36:16.751654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.005 [2024-12-05 12:36:16.751715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.005 [2024-12-05 12:36:16.751730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:46.005 [2024-12-05 12:36:16.751743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:32:46.005 [2024-12-05 12:36:16.751757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.005 [2024-12-05 12:36:16.751781] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:46.005 [2024-12-05 12:36:16.756921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.005 [2024-12-05 12:36:16.756968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:46.005 [2024-12-05 12:36:16.756985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.142 ms 00:32:46.005 [2024-12-05 12:36:16.756994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.005 [2024-12-05 12:36:16.757029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.005 [2024-12-05 12:36:16.757038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:46.005 [2024-12-05 12:36:16.757050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:46.005 [2024-12-05 12:36:16.757058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:46.005 [2024-12-05 12:36:16.757098] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:32:46.005 [2024-12-05 12:36:16.757258] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:46.005 [2024-12-05 12:36:16.757280] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:46.005 [2024-12-05 12:36:16.757292] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:46.005 [2024-12-05 12:36:16.757309] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:46.005 [2024-12-05 12:36:16.757318] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:46.005 [2024-12-05 12:36:16.757331] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:46.005 [2024-12-05 12:36:16.757340] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:46.005 [2024-12-05 12:36:16.757356] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:46.005 [2024-12-05 12:36:16.757363] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:46.005 [2024-12-05 12:36:16.757374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.005 [2024-12-05 12:36:16.757383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:46.005 [2024-12-05 12:36:16.757395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.280 ms 00:32:46.005 [2024-12-05 12:36:16.757403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.005 [2024-12-05 12:36:16.757519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.005 [2024-12-05 12:36:16.757541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:46.005 [2024-12-05 12:36:16.757553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.097 ms 00:32:46.005 [2024-12-05 12:36:16.757561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.005 [2024-12-05 12:36:16.757671] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:46.005 [2024-12-05 12:36:16.757683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:46.005 [2024-12-05 12:36:16.757695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:46.005 [2024-12-05 12:36:16.757703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.005 [2024-12-05 12:36:16.757714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:46.005 [2024-12-05 12:36:16.757722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:46.005 [2024-12-05 12:36:16.757732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:46.005 [2024-12-05 12:36:16.757738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:46.005 [2024-12-05 12:36:16.757747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:46.005 [2024-12-05 12:36:16.757756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.005 [2024-12-05 12:36:16.757767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:46.005 [2024-12-05 12:36:16.757775] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:32:46.005 [2024-12-05 12:36:16.757784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.005 [2024-12-05 12:36:16.757792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:46.005 [2024-12-05 12:36:16.757801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:46.005 [2024-12-05 12:36:16.757808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.005 [2024-12-05 12:36:16.757820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:46.005 [2024-12-05 12:36:16.757829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:46.005 [2024-12-05 12:36:16.757840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.005 [2024-12-05 12:36:16.757847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:46.005 [2024-12-05 12:36:16.757861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:46.005 [2024-12-05 12:36:16.757869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:46.005 [2024-12-05 12:36:16.757879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:46.005 [2024-12-05 12:36:16.757886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:46.005 [2024-12-05 12:36:16.757895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:46.005 [2024-12-05 12:36:16.757902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:46.005 [2024-12-05 12:36:16.757911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:46.005 [2024-12-05 12:36:16.757919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:46.005 [2024-12-05 12:36:16.757928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:46.005 [2024-12-05 12:36:16.757935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:46.005 [2024-12-05 12:36:16.757944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:46.005 [2024-12-05 12:36:16.757951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:46.005 [2024-12-05 12:36:16.757963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:46.005 [2024-12-05 12:36:16.757971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.005 [2024-12-05 12:36:16.757980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:46.005 [2024-12-05 12:36:16.757986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:46.005 [2024-12-05 12:36:16.757996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.005 [2024-12-05 12:36:16.758002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:46.005 [2024-12-05 12:36:16.758011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:46.005 [2024-12-05 12:36:16.758019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.005 [2024-12-05 12:36:16.758030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:46.005 [2024-12-05 12:36:16.758036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:46.005 [2024-12-05 12:36:16.758045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.005 [2024-12-05 12:36:16.758051] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:32:46.005 [2024-12-05 12:36:16.758063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:46.005 [2024-12-05 12:36:16.758072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:46.005 [2024-12-05 12:36:16.758083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:46.005 [2024-12-05 12:36:16.758092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:46.005 [2024-12-05 12:36:16.758104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:46.005 [2024-12-05 12:36:16.758112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:46.005 [2024-12-05 12:36:16.758122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:46.005 [2024-12-05 12:36:16.758128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:46.005 [2024-12-05 12:36:16.758138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:46.005 [2024-12-05 12:36:16.758151] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:46.005 [2024-12-05 12:36:16.758167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:46.005 [2024-12-05 12:36:16.758176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:46.005 [2024-12-05 12:36:16.758185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:46.005 [2024-12-05 12:36:16.758192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:46.005 [2024-12-05 12:36:16.758202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:46.005 [2024-12-05 12:36:16.758210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:46.005 [2024-12-05 12:36:16.758220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:46.005 [2024-12-05 12:36:16.758227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:46.005 [2024-12-05 12:36:16.758237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:46.006 [2024-12-05 12:36:16.758245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:46.006 [2024-12-05 12:36:16.758258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:46.006 [2024-12-05 12:36:16.758268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:46.006 [2024-12-05 12:36:16.758282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:46.006 [2024-12-05 12:36:16.758294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:46.006 [2024-12-05 12:36:16.758308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:46.006 [2024-12-05 12:36:16.758316] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:46.006 [2024-12-05 12:36:16.758328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:46.006 [2024-12-05 12:36:16.758337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:46.006 [2024-12-05 12:36:16.758348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:46.006 [2024-12-05 12:36:16.758355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:46.006 [2024-12-05 12:36:16.758371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:46.006 [2024-12-05 12:36:16.758383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:46.006 [2024-12-05 12:36:16.758393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:46.006 [2024-12-05 12:36:16.758401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.784 ms 00:32:46.006 [2024-12-05 12:36:16.758412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:46.006 [2024-12-05 12:36:16.758455] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
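The trace above assembles the FTL write path in three RPC steps: a PCIe NVMe controller is attached as the cache device, a 5120 MiB split is carved from it for the NV cache, and bdev_ftl_create stacks FTL on top of the base bdev. The sizes are consistent with the jq probes earlier in the trace: block_size 4096 x num_blocks 5242880 gives the 20480.00 MiB base capacity reported during layout setup. A minimal sketch of the same sequence, using only commands and values visible in this log (the $rpc shorthand is added here for brevity):

    # Reconstruction of the device stack traced in ftl/common.sh above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 1. Attach the NVMe SSD that backs the write buffer; SPDK names it cachen1.
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    # 2. Carve one 5120 MiB partition (cachen1p0) out of it for the NV cache.
    $rpc bdev_split_create cachen1 -s 5120 1
    # 3. Create the FTL bdev: -d names the base bdev (here by UUID), -c the
    #    cache split; --l2p_dram_limit bounds the DRAM-resident L2P table
    #    (the trace later reports "l2p maximum resident size is: 1 (of 2) MiB").
    $rpc -t 60 bdev_ftl_create -b ftl -d bf7ba184-4647-429d-9bf8-7154ec6c3ed1 \
        -c cachen1p0 --l2p_dram_limit 2
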
00:32:46.006 [2024-12-05 12:36:16.758813] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:50.205 [2024-12-05 12:36:21.012074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.205 [2024-12-05 12:36:21.012151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:50.205 [2024-12-05 12:36:21.012167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4253.604 ms 00:32:50.205 [2024-12-05 12:36:21.012177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.205 [2024-12-05 12:36:21.037243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.205 [2024-12-05 12:36:21.037292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:50.205 [2024-12-05 12:36:21.037304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.871 ms 00:32:50.205 [2024-12-05 12:36:21.037313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.205 [2024-12-05 12:36:21.037378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.205 [2024-12-05 12:36:21.037389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:50.205 [2024-12-05 12:36:21.037396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:32:50.205 [2024-12-05 12:36:21.037410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.205 [2024-12-05 12:36:21.065204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.205 [2024-12-05 12:36:21.065305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:50.205 [2024-12-05 12:36:21.065317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.755 ms 00:32:50.205 [2024-12-05 12:36:21.065327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.205 [2024-12-05 12:36:21.065354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.205 [2024-12-05 12:36:21.065365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:50.205 [2024-12-05 12:36:21.065373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:50.205 [2024-12-05 12:36:21.065380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.205 [2024-12-05 12:36:21.065810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.205 [2024-12-05 12:36:21.065830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:50.205 [2024-12-05 12:36:21.065844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.393 ms 00:32:50.205 [2024-12-05 12:36:21.065852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.205 [2024-12-05 12:36:21.065885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.205 [2024-12-05 12:36:21.065895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:50.205 [2024-12-05 12:36:21.065903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:32:50.205 [2024-12-05 12:36:21.065914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.466 [2024-12-05 12:36:21.079714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.466 [2024-12-05 12:36:21.079845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:50.466 [2024-12-05 12:36:21.079860] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.785 ms 00:32:50.466 [2024-12-05 12:36:21.079868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.466 [2024-12-05 12:36:21.102441] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:50.466 [2024-12-05 12:36:21.103415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.466 [2024-12-05 12:36:21.103443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:50.466 [2024-12-05 12:36:21.103454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.478 ms 00:32:50.466 [2024-12-05 12:36:21.103475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.466 [2024-12-05 12:36:21.127328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.466 [2024-12-05 12:36:21.127360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:32:50.466 [2024-12-05 12:36:21.127371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.819 ms 00:32:50.466 [2024-12-05 12:36:21.127379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.466 [2024-12-05 12:36:21.127455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.466 [2024-12-05 12:36:21.127479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:50.466 [2024-12-05 12:36:21.127490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:32:50.466 [2024-12-05 12:36:21.127497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.466 [2024-12-05 12:36:21.146039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.466 [2024-12-05 12:36:21.146067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:32:50.466 [2024-12-05 12:36:21.146078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.505 ms 00:32:50.466 [2024-12-05 12:36:21.146085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.466 [2024-12-05 12:36:21.164276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.466 [2024-12-05 12:36:21.164302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:32:50.466 [2024-12-05 12:36:21.164313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.155 ms 00:32:50.466 [2024-12-05 12:36:21.164320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.466 [2024-12-05 12:36:21.164838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.466 [2024-12-05 12:36:21.164849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:50.466 [2024-12-05 12:36:21.164859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.488 ms 00:32:50.466 [2024-12-05 12:36:21.164868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.466 [2024-12-05 12:36:21.243655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.466 [2024-12-05 12:36:21.243798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:32:50.466 [2024-12-05 12:36:21.243825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 78.756 ms 00:32:50.466 [2024-12-05 12:36:21.243835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.466 [2024-12-05 12:36:21.269275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:50.466 [2024-12-05 12:36:21.269314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:32:50.466 [2024-12-05 12:36:21.269329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.099 ms 00:32:50.466 [2024-12-05 12:36:21.269338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.466 [2024-12-05 12:36:21.293012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.466 [2024-12-05 12:36:21.293046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:32:50.466 [2024-12-05 12:36:21.293058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.634 ms 00:32:50.466 [2024-12-05 12:36:21.293066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.466 [2024-12-05 12:36:21.316868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.466 [2024-12-05 12:36:21.316900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:50.466 [2024-12-05 12:36:21.316913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.767 ms 00:32:50.466 [2024-12-05 12:36:21.316920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.466 [2024-12-05 12:36:21.316962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.467 [2024-12-05 12:36:21.316971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:50.467 [2024-12-05 12:36:21.316984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:50.467 [2024-12-05 12:36:21.316991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.467 [2024-12-05 12:36:21.317071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:50.467 [2024-12-05 12:36:21.317083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:50.467 [2024-12-05 12:36:21.317094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:32:50.467 [2024-12-05 12:36:21.317102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:50.467 [2024-12-05 12:36:21.318258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4582.928 ms, result 0 00:32:50.467 { 00:32:50.467 "name": "ftl", 00:32:50.467 "uuid": "c41d8d62-bbc6-4b39-9d35-fd5f270876d3" 00:32:50.467 } 00:32:50.726 12:36:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:32:50.726 [2024-12-05 12:36:21.529413] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:50.726 12:36:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:32:50.986 12:36:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:32:51.246 [2024-12-05 12:36:21.953923] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:51.246 12:36:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:32:51.506 [2024-12-05 12:36:22.171989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:51.506 12:36:22 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:51.767 Fill FTL, iteration 1 00:32:51.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84400 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84400 /var/tmp/spdk.tgt.sock 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84400 ']' 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:51.767 12:36:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:51.767 [2024-12-05 12:36:22.622022] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
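With FTL startup complete (the management trace reports 'FTL startup' finishing in 4582.928 ms, result 0), the target exports the new bdev over NVMe/TCP so that a separate process can drive I/O against it. The four RPCs are visible in the trace at ftl/common.sh@121-124; a condensed sketch, with $rpc again as shorthand:

    # Export the ftl bdev over NVMe/TCP, as traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport --trtype TCP
    # -a allows any host; -m 1 caps the subsystem at one namespace.
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    # Expose the ftl bdev as that namespace.
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    # Listen on loopback, port 4420.
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
        -t TCP -f ipv4 -s 4420 -a 127.0.0.1
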
00:32:51.767 [2024-12-05 12:36:22.622923] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84400 ] 00:32:52.028 [2024-12-05 12:36:22.780838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.288 [2024-12-05 12:36:22.898329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.858 12:36:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.858 12:36:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:52.858 12:36:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:32:53.119 ftln1 00:32:53.119 12:36:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:32:53.119 12:36:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:32:53.379 12:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:32:53.379 12:36:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84400 00:32:53.379 12:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84400 ']' 00:32:53.379 12:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84400 00:32:53.379 12:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:53.379 12:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:53.379 12:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84400 00:32:53.379 killing process with pid 84400 00:32:53.379 12:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:53.379 12:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:53.379 12:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84400' 00:32:53.379 12:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84400 00:32:53.379 12:36:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84400 00:32:55.285 12:36:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:32:55.285 12:36:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:55.285 [2024-12-05 12:36:25.704070] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
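The tcp_dd helper traced above (ftl/common.sh@198-199) does not keep a long-lived initiator around: tcp_initiator_setup launches a throwaway spdk_tgt on its own RPC socket, attaches the exported subsystem (which surfaces as bdev ftln1), snapshots the bdev subsystem configuration, and kills the process again; spdk_dd then runs against that JSON directly. A sketch of the pattern, assuming the autotest helpers waitforlisten and killprocess are sourced and that the save_subsystem_config output is redirected into ini.json (the redirect itself is not visible in the trace, but spdk_dd consumes that file via --json):

    # One-shot initiator setup, mirroring ftl/common.sh@151-176 above.
    sock=/var/tmp/spdk.tgt.sock
    ini=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=$sock &
    spdk_ini_pid=$!
    waitforlisten $spdk_ini_pid $sock
    # Connect to the NVMe/TCP subsystem exported above; this creates bdev ftln1.
    scripts/rpc.py -s $sock bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    # Snapshot the bdev subsystem into a standalone JSON config for spdk_dd.
    { echo '{"subsystems": ['
      scripts/rpc.py -s $sock save_subsystem_config -n bdev
      echo ']}'
    } > "$ini"
    killprocess $spdk_ini_pid
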
00:32:55.285 [2024-12-05 12:36:25.704761] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84442 ] 00:32:55.285 [2024-12-05 12:36:25.861962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.285 [2024-12-05 12:36:25.940645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.715  [2024-12-05T12:36:28.556Z] Copying: 255/1024 [MB] (255 MBps) [2024-12-05T12:36:29.494Z] Copying: 510/1024 [MB] (255 MBps) [2024-12-05T12:36:30.432Z] Copying: 761/1024 [MB] (251 MBps) [2024-12-05T12:36:30.432Z] Copying: 1023/1024 [MB] (262 MBps) [2024-12-05T12:36:31.003Z] Copying: 1024/1024 [MB] (average 255 MBps) 00:33:00.134 00:33:00.134 Calculate MD5 checksum, iteration 1 00:33:00.134 12:36:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:33:00.134 12:36:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:33:00.134 12:36:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:00.134 12:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:00.134 12:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:00.134 12:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:00.134 12:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:00.134 12:36:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:00.134 [2024-12-05 12:36:30.907342] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
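spdk_dd follows dd conventions, with --ib/--ob naming an SPDK bdev as the input/output side instead of a file: --if/--of take regular files, --bs is the block size in bytes, --count the number of blocks, --seek/--skip the block offset (in units of --bs) into the output/input, and --qd the queue depth. Iteration 1 above therefore writes 1024 x 1 MiB of urandom into ftln1 at offset 0, then reads the same 1 GiB extent back into test/ftl/file for hashing. The two invocations, condensed from the trace:

    # Fill and read-back of iteration 1, as traced above.
    dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    # Write 1024 blocks of 1 MiB from /dev/urandom into bdev ftln1 at seek=0.
    $dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=$cfg \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
    # Read the same extent out of ftln1 into a file for checksumming, at skip=0.
    $dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=$cfg \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
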
00:33:00.134 [2024-12-05 12:36:30.907453] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84500 ] 00:33:00.394 [2024-12-05 12:36:31.064354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.394 [2024-12-05 12:36:31.166121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.779  [2024-12-05T12:36:33.593Z] Copying: 555/1024 [MB] (555 MBps) [2024-12-05T12:36:34.164Z] Copying: 1024/1024 [MB] (average 554 MBps) 00:33:03.295 00:33:03.295 12:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:33:03.295 12:36:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:05.194 12:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:05.194 12:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=e291f2a5866140c8f49d65676dc85f11 00:33:05.194 Fill FTL, iteration 2 00:33:05.194 12:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:05.194 12:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:05.194 12:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:33:05.194 12:36:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:05.194 12:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:05.194 12:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:05.194 12:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:05.194 12:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:05.194 12:36:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:05.451 [2024-12-05 12:36:36.113233] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
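The seek/skip bookkeeping is in units of --bs: after iteration 1 the script advances seek from 0 to 1024 (upgrade_shutdown.sh@41 above), so the second fill starts at byte offset 1024 x 1048576 = 1 GiB, immediately after the first extent, and the matching read-back uses skip=1024. A small sketch of the arithmetic, with the loop index i assumed:

    # Offset bookkeeping: iteration i covers bytes [i*count*bs, (i+1)*count*bs).
    bs=1048576 count=1024 seek=0
    for i in 0 1; do
        echo "iteration $i starts at byte offset $(( seek * bs ))"
        seek=$(( seek + count ))    # 0 -> 1024 -> 2048, as in the trace
    done
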
00:33:05.451 [2024-12-05 12:36:36.113514] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84563 ] 00:33:05.451 [2024-12-05 12:36:36.274599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.708 [2024-12-05 12:36:36.372507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.083  [2024-12-05T12:36:38.891Z] Copying: 210/1024 [MB] (210 MBps) [2024-12-05T12:36:39.833Z] Copying: 455/1024 [MB] (245 MBps) [2024-12-05T12:36:40.797Z] Copying: 702/1024 [MB] (247 MBps) [2024-12-05T12:36:41.057Z] Copying: 948/1024 [MB] (246 MBps) [2024-12-05T12:36:41.628Z] Copying: 1024/1024 [MB] (average 238 MBps) 00:33:10.759 00:33:10.759 Calculate MD5 checksum, iteration 2 00:33:10.759 12:36:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:33:10.759 12:36:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:33:10.759 12:36:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:10.759 12:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:10.759 12:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:10.759 12:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:10.759 12:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:10.759 12:36:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:11.019 [2024-12-05 12:36:41.671615] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
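Each read-back digest is captured into a per-iteration array (sums[0]=e291f2a5... above; sums[1] follows), presumably for comparison after the shutdown/upgrade cycle this test exercises, which lies beyond this excerpt. After both fills, 2 GiB of user data sit in the 5120 MiB NV cache, and the property dump that follows shows two CLOSED chunks at utilization 1.0 plus one barely used OPEN chunk; the script counts the non-empty chunks with the jq filter traced at upgrade_shutdown.sh@63 and checks the count against zero (used=3 here). A sketch of both steps; the exit-on-failure action is an assumption, since only the [[ 3 -eq 0 ]] guard is visible in the trace:

    # Record the iteration's checksum (upgrade_shutdown.sh@47-48 above);
    # assumes the loop index i is in scope.
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
    # Count cache chunks with non-zero utilization (upgrade_shutdown.sh@59-63).
    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]] && exit 1    # guard at @64; failure handling assumed
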
00:33:11.019 [2024-12-05 12:36:41.672067] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84621 ] 00:33:11.019 [2024-12-05 12:36:41.830361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.279 [2024-12-05 12:36:41.907730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.664  [2024-12-05T12:36:44.103Z] Copying: 667/1024 [MB] (667 MBps) [2024-12-05T12:36:44.674Z] Copying: 1024/1024 [MB] (average 650 MBps) 00:33:13.805 00:33:13.805 12:36:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:33:13.805 12:36:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:16.335 12:36:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:16.335 12:36:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b8134c0647c01a94499f2f8a230241b9 00:33:16.335 12:36:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:16.335 12:36:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:16.335 12:36:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:16.335 [2024-12-05 12:36:46.940863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.335 [2024-12-05 12:36:46.941011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:16.335 [2024-12-05 12:36:46.941031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:16.335 [2024-12-05 12:36:46.941039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.335 [2024-12-05 12:36:46.941066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.335 [2024-12-05 12:36:46.941079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:16.335 [2024-12-05 12:36:46.941086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:16.335 [2024-12-05 12:36:46.941093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.335 [2024-12-05 12:36:46.941109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.336 [2024-12-05 12:36:46.941116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:16.336 [2024-12-05 12:36:46.941123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:16.336 [2024-12-05 12:36:46.941129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.336 [2024-12-05 12:36:46.941187] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.311 ms, result 0 00:33:16.336 true 00:33:16.336 12:36:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:16.336 { 00:33:16.336 "name": "ftl", 00:33:16.336 "properties": [ 00:33:16.336 { 00:33:16.336 "name": "superblock_version", 00:33:16.336 "value": 5, 00:33:16.336 "read-only": true 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "name": "base_device", 00:33:16.336 "bands": [ 00:33:16.336 { 00:33:16.336 "id": 0, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 
00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 1, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 2, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 3, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 4, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 5, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 6, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 7, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 8, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 9, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 10, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 11, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 12, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 13, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 14, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 15, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 16, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 17, 00:33:16.336 "state": "FREE", 00:33:16.336 "validity": 0.0 00:33:16.336 } 00:33:16.336 ], 00:33:16.336 "read-only": true 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "name": "cache_device", 00:33:16.336 "type": "bdev", 00:33:16.336 "chunks": [ 00:33:16.336 { 00:33:16.336 "id": 0, 00:33:16.336 "state": "INACTIVE", 00:33:16.336 "utilization": 0.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 1, 00:33:16.336 "state": "CLOSED", 00:33:16.336 "utilization": 1.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 2, 00:33:16.336 "state": "CLOSED", 00:33:16.336 "utilization": 1.0 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 3, 00:33:16.336 "state": "OPEN", 00:33:16.336 "utilization": 0.001953125 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "id": 4, 00:33:16.336 "state": "OPEN", 00:33:16.336 "utilization": 0.0 00:33:16.336 } 00:33:16.336 ], 00:33:16.336 "read-only": true 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "name": "verbose_mode", 00:33:16.336 "value": true, 00:33:16.336 "unit": "", 00:33:16.336 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:16.336 }, 00:33:16.336 { 00:33:16.336 "name": "prep_upgrade_on_shutdown", 00:33:16.336 "value": false, 00:33:16.336 "unit": "", 00:33:16.336 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:16.336 } 00:33:16.336 ] 00:33:16.336 } 00:33:16.336 12:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:33:16.594 [2024-12-05 12:36:47.317078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:33:16.594 [2024-12-05 12:36:47.317124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:16.594 [2024-12-05 12:36:47.317136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:16.594 [2024-12-05 12:36:47.317143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.594 [2024-12-05 12:36:47.317162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.594 [2024-12-05 12:36:47.317170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:16.594 [2024-12-05 12:36:47.317177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:16.594 [2024-12-05 12:36:47.317184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.594 [2024-12-05 12:36:47.317200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:16.594 [2024-12-05 12:36:47.317206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:16.594 [2024-12-05 12:36:47.317213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:16.594 [2024-12-05 12:36:47.317219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:16.594 [2024-12-05 12:36:47.317272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.184 ms, result 0 00:33:16.594 true 00:33:16.594 12:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:33:16.594 12:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:16.594 12:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:16.852 12:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:33:16.852 12:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:33:16.852 12:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:17.111 [2024-12-05 12:36:47.721390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.111 [2024-12-05 12:36:47.721555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:17.111 [2024-12-05 12:36:47.721604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:17.111 [2024-12-05 12:36:47.721631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.111 [2024-12-05 12:36:47.721668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.111 [2024-12-05 12:36:47.721686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:17.111 [2024-12-05 12:36:47.721702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:17.111 [2024-12-05 12:36:47.721717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.111 [2024-12-05 12:36:47.721742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.111 [2024-12-05 12:36:47.721759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:17.111 [2024-12-05 12:36:47.721775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:17.111 [2024-12-05 12:36:47.721822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:33:17.111 [2024-12-05 12:36:47.721887] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.483 ms, result 0 00:33:17.111 true 00:33:17.111 12:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:17.111 { 00:33:17.111 "name": "ftl", 00:33:17.111 "properties": [ 00:33:17.111 { 00:33:17.111 "name": "superblock_version", 00:33:17.111 "value": 5, 00:33:17.111 "read-only": true 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "name": "base_device", 00:33:17.111 "bands": [ 00:33:17.111 { 00:33:17.111 "id": 0, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 1, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 2, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 3, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 4, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 5, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 6, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 7, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 8, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 9, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 10, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 11, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 12, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 13, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 14, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 15, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 16, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 17, 00:33:17.111 "state": "FREE", 00:33:17.111 "validity": 0.0 00:33:17.111 } 00:33:17.111 ], 00:33:17.111 "read-only": true 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "name": "cache_device", 00:33:17.111 "type": "bdev", 00:33:17.111 "chunks": [ 00:33:17.111 { 00:33:17.111 "id": 0, 00:33:17.111 "state": "INACTIVE", 00:33:17.111 "utilization": 0.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 1, 00:33:17.111 "state": "CLOSED", 00:33:17.111 "utilization": 1.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 2, 00:33:17.111 "state": "CLOSED", 00:33:17.111 "utilization": 1.0 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 3, 00:33:17.111 "state": "OPEN", 00:33:17.111 "utilization": 0.001953125 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "id": 4, 00:33:17.111 "state": "OPEN", 00:33:17.111 "utilization": 0.0 00:33:17.111 } 00:33:17.111 ], 00:33:17.111 "read-only": true 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "name": "verbose_mode", 
00:33:17.111 "value": true, 00:33:17.111 "unit": "", 00:33:17.111 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:17.111 }, 00:33:17.111 { 00:33:17.111 "name": "prep_upgrade_on_shutdown", 00:33:17.111 "value": true, 00:33:17.111 "unit": "", 00:33:17.112 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:17.112 } 00:33:17.112 ] 00:33:17.112 } 00:33:17.112 12:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:33:17.112 12:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84266 ]] 00:33:17.112 12:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84266 00:33:17.112 12:36:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84266 ']' 00:33:17.112 12:36:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84266 00:33:17.112 12:36:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:17.112 12:36:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:17.112 12:36:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84266 00:33:17.373 12:36:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:17.373 killing process with pid 84266 00:33:17.373 12:36:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:17.373 12:36:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84266' 00:33:17.373 12:36:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84266 00:33:17.373 12:36:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84266 00:33:17.944 [2024-12-05 12:36:48.555552] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:17.944 [2024-12-05 12:36:48.567815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.944 [2024-12-05 12:36:48.567852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:17.944 [2024-12-05 12:36:48.567864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:17.944 [2024-12-05 12:36:48.567870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:17.944 [2024-12-05 12:36:48.567888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:17.944 [2024-12-05 12:36:48.570162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:17.944 [2024-12-05 12:36:48.570188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:17.944 [2024-12-05 12:36:48.570197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.262 ms 00:33:17.944 [2024-12-05 12:36:48.570208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.933 [2024-12-05 12:36:57.639058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:27.933 [2024-12-05 12:36:57.639150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:27.933 [2024-12-05 12:36:57.639177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9068.794 ms 00:33:27.933 [2024-12-05 12:36:57.639187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.933 [2024-12-05 12:36:57.640812] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:33:27.933 [2024-12-05 12:36:57.640857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:27.933 [2024-12-05 12:36:57.640871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.606 ms 00:33:27.933 [2024-12-05 12:36:57.640881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.933 [2024-12-05 12:36:57.642047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:27.933 [2024-12-05 12:36:57.642285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:27.933 [2024-12-05 12:36:57.642305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.132 ms 00:33:27.933 [2024-12-05 12:36:57.642323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.933 [2024-12-05 12:36:57.653973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:27.933 [2024-12-05 12:36:57.654020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:27.933 [2024-12-05 12:36:57.654033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.601 ms 00:33:27.933 [2024-12-05 12:36:57.654043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.933 [2024-12-05 12:36:57.661311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:27.933 [2024-12-05 12:36:57.661522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:27.933 [2024-12-05 12:36:57.661544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.222 ms 00:33:27.933 [2024-12-05 12:36:57.661553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.933 [2024-12-05 12:36:57.662001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:27.933 [2024-12-05 12:36:57.662060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:27.933 [2024-12-05 12:36:57.662075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.091 ms 00:33:27.933 [2024-12-05 12:36:57.662085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.933 [2024-12-05 12:36:57.673047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:27.933 [2024-12-05 12:36:57.673246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:27.933 [2024-12-05 12:36:57.673269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.943 ms 00:33:27.933 [2024-12-05 12:36:57.673277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.933 [2024-12-05 12:36:57.683864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:27.933 [2024-12-05 12:36:57.684035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:27.933 [2024-12-05 12:36:57.684053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.545 ms 00:33:27.933 [2024-12-05 12:36:57.684062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.933 [2024-12-05 12:36:57.694689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:27.933 [2024-12-05 12:36:57.694869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:27.933 [2024-12-05 12:36:57.694887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.587 ms 00:33:27.933 [2024-12-05 12:36:57.694896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.933 [2024-12-05 12:36:57.705355] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:27.933 [2024-12-05 12:36:57.705399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:27.933 [2024-12-05 12:36:57.705410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.365 ms 00:33:27.933 [2024-12-05 12:36:57.705419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.933 [2024-12-05 12:36:57.705481] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:27.933 [2024-12-05 12:36:57.705510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:27.933 [2024-12-05 12:36:57.705523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:27.933 [2024-12-05 12:36:57.705532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:27.934 [2024-12-05 12:36:57.705542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:27.934 [2024-12-05 12:36:57.705551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:27.934 [2024-12-05 12:36:57.705559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:27.934 [2024-12-05 12:36:57.705568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:27.934 [2024-12-05 12:36:57.705578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:27.934 [2024-12-05 12:36:57.705586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:27.934 [2024-12-05 12:36:57.705594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:27.934 [2024-12-05 12:36:57.705602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:27.934 [2024-12-05 12:36:57.705639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:27.934 [2024-12-05 12:36:57.705649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:27.934 [2024-12-05 12:36:57.705657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:27.934 [2024-12-05 12:36:57.705665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:27.934 [2024-12-05 12:36:57.705674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:27.934 [2024-12-05 12:36:57.705682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:27.934 [2024-12-05 12:36:57.705690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:27.934 [2024-12-05 12:36:57.705704] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:27.934 [2024-12-05 12:36:57.705714] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c41d8d62-bbc6-4b39-9d35-fd5f270876d3 00:33:27.934 [2024-12-05 12:36:57.705723] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:27.934 [2024-12-05 12:36:57.705731] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:33:27.934 [2024-12-05 12:36:57.705739] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:33:27.934 [2024-12-05 12:36:57.705749] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:33:27.934 [2024-12-05 12:36:57.705764] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:27.934 [2024-12-05 12:36:57.705773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:27.934 [2024-12-05 12:36:57.705786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:27.934 [2024-12-05 12:36:57.705794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:27.934 [2024-12-05 12:36:57.705803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:27.934 [2024-12-05 12:36:57.705813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:27.934 [2024-12-05 12:36:57.705825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:27.934 [2024-12-05 12:36:57.705834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.333 ms 00:33:27.934 [2024-12-05 12:36:57.705843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.934 [2024-12-05 12:36:57.720736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:27.934 [2024-12-05 12:36:57.720895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:27.934 [2024-12-05 12:36:57.720923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.860 ms 00:33:27.934 [2024-12-05 12:36:57.720932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.934 [2024-12-05 12:36:57.721351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:27.934 [2024-12-05 12:36:57.721364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:27.934 [2024-12-05 12:36:57.721374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.396 ms 00:33:27.934 [2024-12-05 12:36:57.721384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.934 [2024-12-05 12:36:57.771192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:27.934 [2024-12-05 12:36:57.771246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:27.934 [2024-12-05 12:36:57.771259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:27.934 [2024-12-05 12:36:57.771268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.934 [2024-12-05 12:36:57.771307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:27.934 [2024-12-05 12:36:57.771317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:27.934 [2024-12-05 12:36:57.771326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:27.934 [2024-12-05 12:36:57.771334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.934 [2024-12-05 12:36:57.771416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:27.934 [2024-12-05 12:36:57.771429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:27.934 [2024-12-05 12:36:57.771444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:27.934 [2024-12-05 12:36:57.771454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.934 [2024-12-05 12:36:57.771499] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:27.934 [2024-12-05 12:36:57.771510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:27.934 [2024-12-05 12:36:57.771519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:27.934 [2024-12-05 12:36:57.771528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.934 [2024-12-05 12:36:57.864043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:27.934 [2024-12-05 12:36:57.864108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:27.934 [2024-12-05 12:36:57.864132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:27.934 [2024-12-05 12:36:57.864141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.934 [2024-12-05 12:36:57.939509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:27.934 [2024-12-05 12:36:57.939574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:27.934 [2024-12-05 12:36:57.939589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:27.934 [2024-12-05 12:36:57.939599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.934 [2024-12-05 12:36:57.939736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:27.934 [2024-12-05 12:36:57.939748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:27.934 [2024-12-05 12:36:57.939760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:27.934 [2024-12-05 12:36:57.939775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.934 [2024-12-05 12:36:57.939827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:27.934 [2024-12-05 12:36:57.939838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:27.934 [2024-12-05 12:36:57.939848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:27.934 [2024-12-05 12:36:57.939856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.934 [2024-12-05 12:36:57.939967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:27.934 [2024-12-05 12:36:57.939981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:27.934 [2024-12-05 12:36:57.939991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:27.934 [2024-12-05 12:36:57.940000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.934 [2024-12-05 12:36:57.940041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:27.934 [2024-12-05 12:36:57.940051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:27.934 [2024-12-05 12:36:57.940060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:27.934 [2024-12-05 12:36:57.940070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.934 [2024-12-05 12:36:57.940127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:27.934 [2024-12-05 12:36:57.940139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:27.934 [2024-12-05 12:36:57.940149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:27.934 [2024-12-05 12:36:57.940159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.934 
[2024-12-05 12:36:57.940224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:27.934 [2024-12-05 12:36:57.940248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:27.934 [2024-12-05 12:36:57.940258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:27.934 [2024-12-05 12:36:57.940268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:27.934 [2024-12-05 12:36:57.940439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9372.538 ms, result 0 00:33:32.143 12:37:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:32.143 12:37:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:33:32.143 12:37:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:32.143 12:37:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:32.143 12:37:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:32.143 12:37:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84815 00:33:32.143 12:37:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:32.143 12:37:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84815 00:33:32.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.143 12:37:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84815 ']' 00:33:32.143 12:37:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.143 12:37:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:32.143 12:37:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:32.143 12:37:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.143 12:37:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:32.143 12:37:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:32.143 [2024-12-05 12:37:02.413264] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
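The trace above is the clean half of the test: tcp_target_shutdown (ftl/common.sh@130-131) terminates the first target, pid 84266, with a plain SIGTERM via killprocess, and because prep_upgrade_on_shutdown was still true the FTL shutdown path persists L2P, NV cache, valid map, P2L, band, trim and superblock metadata, sets the clean state, and rolls its startup steps back before the process exits ('FTL shutdown' completes in 9372.538 ms with result 0). A minimal sketch of the kill-and-wait helper, reconstructed from the common/autotest_common.sh@954-978 xtrace records above rather than copied from the SPDK source:

killprocess() {
	local pid=$1
	[ -z "$pid" ] && return 1                            # @954: no PID was recorded
	kill -0 "$pid"                                       # @958: the process must still exist
	if [ "$(uname)" = Linux ]; then                      # @959
		process_name=$(ps --no-headers -o comm= "$pid")  # @960: reports reactor_0 here
	fi
	echo "killing process with pid $pid"                 # @972
	# @964 compares $process_name with "sudo"; that branch is not exercised
	# in this run, so its body is omitted from this sketch.
	kill "$pid"                                          # @973: SIGTERM, so SPDK can run its clean FTL shutdown
	wait "$pid"                                          # @978: block until 'FTL shutdown' has finished
}

The plain SIGTERM is the point: it lets the reactor drive the 'FTL shutdown' management process logged above, in contrast to the kill -9 used later by tcp_target_shutdown_dirty.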
00:33:32.143 [2024-12-05 12:37:02.413738] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84815 ] 00:33:32.143 [2024-12-05 12:37:02.582584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.143 [2024-12-05 12:37:02.736210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.087 [2024-12-05 12:37:03.631514] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:33.087 [2024-12-05 12:37:03.631883] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:33.087 [2024-12-05 12:37:03.785964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.087 [2024-12-05 12:37:03.786238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:33.087 [2024-12-05 12:37:03.786267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:33.087 [2024-12-05 12:37:03.786279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.087 [2024-12-05 12:37:03.786366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.087 [2024-12-05 12:37:03.786378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:33.087 [2024-12-05 12:37:03.786387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:33:33.087 [2024-12-05 12:37:03.786395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.087 [2024-12-05 12:37:03.786426] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:33.087 [2024-12-05 12:37:03.787190] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:33.087 [2024-12-05 12:37:03.787211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.087 [2024-12-05 12:37:03.787222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:33.087 [2024-12-05 12:37:03.787232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.797 ms 00:33:33.087 [2024-12-05 12:37:03.787241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.087 [2024-12-05 12:37:03.789537] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:33.088 [2024-12-05 12:37:03.804679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.088 [2024-12-05 12:37:03.804868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:33.088 [2024-12-05 12:37:03.804899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.143 ms 00:33:33.088 [2024-12-05 12:37:03.804910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.088 [2024-12-05 12:37:03.805334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.088 [2024-12-05 12:37:03.805371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:33.088 [2024-12-05 12:37:03.805394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:33:33.088 [2024-12-05 12:37:03.805405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.088 [2024-12-05 12:37:03.817084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.088 [2024-12-05 
12:37:03.817133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:33.088 [2024-12-05 12:37:03.817146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.543 ms 00:33:33.088 [2024-12-05 12:37:03.817155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.088 [2024-12-05 12:37:03.817233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.088 [2024-12-05 12:37:03.817244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:33.088 [2024-12-05 12:37:03.817253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:33:33.088 [2024-12-05 12:37:03.817262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.088 [2024-12-05 12:37:03.817330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.088 [2024-12-05 12:37:03.817346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:33.088 [2024-12-05 12:37:03.817355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:33.088 [2024-12-05 12:37:03.817364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.088 [2024-12-05 12:37:03.817392] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:33.088 [2024-12-05 12:37:03.821990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.088 [2024-12-05 12:37:03.822033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:33.088 [2024-12-05 12:37:03.822045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.606 ms 00:33:33.088 [2024-12-05 12:37:03.822058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.088 [2024-12-05 12:37:03.822089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.088 [2024-12-05 12:37:03.822099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:33.088 [2024-12-05 12:37:03.822108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:33.088 [2024-12-05 12:37:03.822117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.088 [2024-12-05 12:37:03.822161] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:33.088 [2024-12-05 12:37:03.822323] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:33.088 [2024-12-05 12:37:03.822365] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:33.088 [2024-12-05 12:37:03.822382] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:33.088 [2024-12-05 12:37:03.822523] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:33.088 [2024-12-05 12:37:03.822536] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:33.088 [2024-12-05 12:37:03.822548] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:33.088 [2024-12-05 12:37:03.822559] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:33.088 [2024-12-05 12:37:03.822570] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:33:33.088 [2024-12-05 12:37:03.822583] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:33.088 [2024-12-05 12:37:03.822592] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:33.088 [2024-12-05 12:37:03.822600] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:33.088 [2024-12-05 12:37:03.822608] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:33.088 [2024-12-05 12:37:03.822616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.088 [2024-12-05 12:37:03.822624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:33.088 [2024-12-05 12:37:03.822632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.459 ms 00:33:33.088 [2024-12-05 12:37:03.822641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.088 [2024-12-05 12:37:03.822728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.088 [2024-12-05 12:37:03.822736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:33.088 [2024-12-05 12:37:03.822747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:33:33.088 [2024-12-05 12:37:03.822754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.088 [2024-12-05 12:37:03.822863] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:33.088 [2024-12-05 12:37:03.822874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:33.088 [2024-12-05 12:37:03.822882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:33.088 [2024-12-05 12:37:03.822891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:33.088 [2024-12-05 12:37:03.822899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:33.088 [2024-12-05 12:37:03.822906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:33.088 [2024-12-05 12:37:03.822913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:33.088 [2024-12-05 12:37:03.822921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:33.088 [2024-12-05 12:37:03.822930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:33.088 [2024-12-05 12:37:03.822938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:33.088 [2024-12-05 12:37:03.822945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:33.088 [2024-12-05 12:37:03.822954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:33.088 [2024-12-05 12:37:03.822961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:33.088 [2024-12-05 12:37:03.822968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:33.088 [2024-12-05 12:37:03.822979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:33.088 [2024-12-05 12:37:03.822986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:33.088 [2024-12-05 12:37:03.822994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:33.088 [2024-12-05 12:37:03.823001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:33.088 [2024-12-05 12:37:03.823009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:33.088 [2024-12-05 12:37:03.823017] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:33.088 [2024-12-05 12:37:03.823024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:33.088 [2024-12-05 12:37:03.823031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:33.088 [2024-12-05 12:37:03.823039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:33.088 [2024-12-05 12:37:03.823053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:33.088 [2024-12-05 12:37:03.823060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:33.088 [2024-12-05 12:37:03.823066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:33.088 [2024-12-05 12:37:03.823074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:33.088 [2024-12-05 12:37:03.823080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:33.088 [2024-12-05 12:37:03.823087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:33.088 [2024-12-05 12:37:03.823094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:33.089 [2024-12-05 12:37:03.823101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:33.089 [2024-12-05 12:37:03.823107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:33.089 [2024-12-05 12:37:03.823114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:33.089 [2024-12-05 12:37:03.823120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:33.089 [2024-12-05 12:37:03.823127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:33.089 [2024-12-05 12:37:03.823134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:33.089 [2024-12-05 12:37:03.823140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:33.089 [2024-12-05 12:37:03.823147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:33.089 [2024-12-05 12:37:03.823154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:33.089 [2024-12-05 12:37:03.823160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:33.089 [2024-12-05 12:37:03.823167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:33.089 [2024-12-05 12:37:03.823173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:33.089 [2024-12-05 12:37:03.823180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:33.089 [2024-12-05 12:37:03.823188] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:33.089 [2024-12-05 12:37:03.823197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:33.089 [2024-12-05 12:37:03.823204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:33.089 [2024-12-05 12:37:03.823213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:33.089 [2024-12-05 12:37:03.823225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:33.089 [2024-12-05 12:37:03.823232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:33.089 [2024-12-05 12:37:03.823239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:33.089 [2024-12-05 12:37:03.823246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:33.089 [2024-12-05 12:37:03.823253] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:33.089 [2024-12-05 12:37:03.823259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:33.089 [2024-12-05 12:37:03.823267] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:33.089 [2024-12-05 12:37:03.823278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:33.089 [2024-12-05 12:37:03.823286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:33.089 [2024-12-05 12:37:03.823294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:33.089 [2024-12-05 12:37:03.823301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:33.089 [2024-12-05 12:37:03.823308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:33.089 [2024-12-05 12:37:03.823315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:33.089 [2024-12-05 12:37:03.823323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:33.089 [2024-12-05 12:37:03.823330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:33.089 [2024-12-05 12:37:03.823337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:33.089 [2024-12-05 12:37:03.823344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:33.089 [2024-12-05 12:37:03.823351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:33.089 [2024-12-05 12:37:03.823358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:33.089 [2024-12-05 12:37:03.823365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:33.089 [2024-12-05 12:37:03.823372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:33.089 [2024-12-05 12:37:03.823379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:33.089 [2024-12-05 12:37:03.823386] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:33.089 [2024-12-05 12:37:03.823395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:33.089 [2024-12-05 12:37:03.823403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:33.089 [2024-12-05 12:37:03.823411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:33.089 [2024-12-05 12:37:03.823419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:33.089 [2024-12-05 12:37:03.823426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:33.089 [2024-12-05 12:37:03.823435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:33.089 [2024-12-05 12:37:03.823443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:33.089 [2024-12-05 12:37:03.823452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.642 ms 00:33:33.089 [2024-12-05 12:37:03.823474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:33.089 [2024-12-05 12:37:03.823524] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:33:33.089 [2024-12-05 12:37:03.823536] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:37.368 [2024-12-05 12:37:08.102007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.368 [2024-12-05 12:37:08.102344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:37.368 [2024-12-05 12:37:08.102431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4278.467 ms 00:33:37.368 [2024-12-05 12:37:08.102457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.368 [2024-12-05 12:37:08.134298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.368 [2024-12-05 12:37:08.134568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:37.368 [2024-12-05 12:37:08.134994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.530 ms 00:33:37.368 [2024-12-05 12:37:08.135292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.368 [2024-12-05 12:37:08.135656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.368 [2024-12-05 12:37:08.135794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:37.368 [2024-12-05 12:37:08.135996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:33:37.368 [2024-12-05 12:37:08.136071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.368 [2024-12-05 12:37:08.175624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.368 [2024-12-05 12:37:08.175827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:37.368 [2024-12-05 12:37:08.176055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.382 ms 00:33:37.368 [2024-12-05 12:37:08.176097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.368 [2024-12-05 12:37:08.176156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.368 [2024-12-05 12:37:08.176180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:37.368 [2024-12-05 12:37:08.176202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:37.368 [2024-12-05 12:37:08.176222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.368 [2024-12-05 12:37:08.176889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.368 [2024-12-05 12:37:08.177049] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:37.368 [2024-12-05 12:37:08.177111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.599 ms 00:33:37.368 [2024-12-05 12:37:08.177134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.368 [2024-12-05 12:37:08.177212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.368 [2024-12-05 12:37:08.177236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:37.368 [2024-12-05 12:37:08.177258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:33:37.368 [2024-12-05 12:37:08.177277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.368 [2024-12-05 12:37:08.194912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.368 [2024-12-05 12:37:08.195082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:37.368 [2024-12-05 12:37:08.195155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.602 ms 00:33:37.368 [2024-12-05 12:37:08.195178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.368 [2024-12-05 12:37:08.219727] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:37.368 [2024-12-05 12:37:08.219961] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:37.368 [2024-12-05 12:37:08.219988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.368 [2024-12-05 12:37:08.220001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:33:37.368 [2024-12-05 12:37:08.220014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.648 ms 00:33:37.368 [2024-12-05 12:37:08.220025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.368 [2024-12-05 12:37:08.235342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.631 [2024-12-05 12:37:08.235553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:33:37.631 [2024-12-05 12:37:08.235576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.258 ms 00:33:37.631 [2024-12-05 12:37:08.235586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.631 [2024-12-05 12:37:08.248846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.631 [2024-12-05 12:37:08.248898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:33:37.631 [2024-12-05 12:37:08.248910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.118 ms 00:33:37.631 [2024-12-05 12:37:08.248918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.631 [2024-12-05 12:37:08.261440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.631 [2024-12-05 12:37:08.261507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:33:37.631 [2024-12-05 12:37:08.261520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.468 ms 00:33:37.631 [2024-12-05 12:37:08.261528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.631 [2024-12-05 12:37:08.262203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.631 [2024-12-05 12:37:08.262239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:37.631 [2024-12-05 
12:37:08.262250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.552 ms 00:33:37.631 [2024-12-05 12:37:08.262259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.631 [2024-12-05 12:37:08.328477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.631 [2024-12-05 12:37:08.328554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:37.631 [2024-12-05 12:37:08.328571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 66.192 ms 00:33:37.631 [2024-12-05 12:37:08.328581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.631 [2024-12-05 12:37:08.340196] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:37.631 [2024-12-05 12:37:08.341456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.631 [2024-12-05 12:37:08.341526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:37.631 [2024-12-05 12:37:08.341540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.778 ms 00:33:37.631 [2024-12-05 12:37:08.341549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.631 [2024-12-05 12:37:08.341669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.631 [2024-12-05 12:37:08.341684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:33:37.631 [2024-12-05 12:37:08.341694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:33:37.631 [2024-12-05 12:37:08.341703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.631 [2024-12-05 12:37:08.341770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.631 [2024-12-05 12:37:08.341781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:37.631 [2024-12-05 12:37:08.341790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:33:37.631 [2024-12-05 12:37:08.341799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.631 [2024-12-05 12:37:08.341822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.631 [2024-12-05 12:37:08.341832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:37.631 [2024-12-05 12:37:08.341844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:37.631 [2024-12-05 12:37:08.341852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.631 [2024-12-05 12:37:08.341889] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:37.631 [2024-12-05 12:37:08.341901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.631 [2024-12-05 12:37:08.341909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:37.631 [2024-12-05 12:37:08.341918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:37.631 [2024-12-05 12:37:08.341926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:37.631 [2024-12-05 12:37:08.367722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:37.631 [2024-12-05 12:37:08.367783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:37.631 [2024-12-05 12:37:08.367796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.774 ms 00:33:37.631 [2024-12-05 12:37:08.367805] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:37.631 [2024-12-05 12:37:08.367904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:37.631 [2024-12-05 12:37:08.367914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:33:37.631 [2024-12-05 12:37:08.367924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms
00:33:37.631 [2024-12-05 12:37:08.367932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:37.631 [2024-12-05 12:37:08.369248] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4582.768 ms, result 0
00:33:37.631 [2024-12-05 12:37:08.384162] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:37.631 [2024-12-05 12:37:08.400166] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:33:37.631 [2024-12-05 12:37:08.408530] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:33:37.631 12:37:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:37.631 12:37:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:33:37.631 12:37:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:33:37.631 12:37:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:33:37.631 12:37:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:33:37.893 [2024-12-05 12:37:08.648509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:37.893 [2024-12-05 12:37:08.648569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:33:37.893 [2024-12-05 12:37:08.648588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms
00:33:37.893 [2024-12-05 12:37:08.648597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:37.893 [2024-12-05 12:37:08.648637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:37.893 [2024-12-05 12:37:08.648648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:33:37.893 [2024-12-05 12:37:08.648657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:33:37.893 [2024-12-05 12:37:08.648665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:37.893 [2024-12-05 12:37:08.648686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:33:37.893 [2024-12-05 12:37:08.648696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:33:37.893 [2024-12-05 12:37:08.648705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:33:37.893 [2024-12-05 12:37:08.648712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:33:37.893 [2024-12-05 12:37:08.648780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.275 ms, result 0
00:33:37.893 true
00:33:37.893 12:37:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:33:38.155 {
00:33:38.155 "name": "ftl",
00:33:38.156 "properties": [
00:33:38.156 {
00:33:38.156 "name": "superblock_version",
00:33:38.156 "value": 5,
00:33:38.156 "read-only": true
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "name": "base_device",
00:33:38.156 "bands": [
00:33:38.156 {
00:33:38.156 "id": 0,
00:33:38.156 "state": "CLOSED",
00:33:38.156 "validity": 1.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 1,
00:33:38.156 "state": "CLOSED",
00:33:38.156 "validity": 1.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 2,
00:33:38.156 "state": "CLOSED",
00:33:38.156 "validity": 0.007843137254901933
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 3,
00:33:38.156 "state": "FREE",
00:33:38.156 "validity": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 4,
00:33:38.156 "state": "FREE",
00:33:38.156 "validity": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 5,
00:33:38.156 "state": "FREE",
00:33:38.156 "validity": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 6,
00:33:38.156 "state": "FREE",
00:33:38.156 "validity": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 7,
00:33:38.156 "state": "FREE",
00:33:38.156 "validity": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 8,
00:33:38.156 "state": "FREE",
00:33:38.156 "validity": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 9,
00:33:38.156 "state": "FREE",
00:33:38.156 "validity": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 10,
00:33:38.156 "state": "FREE",
00:33:38.156 "validity": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 11,
00:33:38.156 "state": "FREE",
00:33:38.156 "validity": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 12,
00:33:38.156 "state": "FREE",
00:33:38.156 "validity": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 13,
00:33:38.156 "state": "FREE",
00:33:38.156 "validity": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 14,
00:33:38.156 "state": "FREE",
00:33:38.156 "validity": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 15,
00:33:38.156 "state": "FREE",
00:33:38.156 "validity": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 16,
00:33:38.156 "state": "FREE",
00:33:38.156 "validity": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 17,
00:33:38.156 "state": "FREE",
00:33:38.156 "validity": 0.0
00:33:38.156 }
00:33:38.156 ],
00:33:38.156 "read-only": true
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "name": "cache_device",
00:33:38.156 "type": "bdev",
00:33:38.156 "chunks": [
00:33:38.156 {
00:33:38.156 "id": 0,
00:33:38.156 "state": "INACTIVE",
00:33:38.156 "utilization": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 1,
00:33:38.156 "state": "OPEN",
00:33:38.156 "utilization": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 2,
00:33:38.156 "state": "OPEN",
00:33:38.156 "utilization": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 3,
00:33:38.156 "state": "FREE",
00:33:38.156 "utilization": 0.0
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "id": 4,
00:33:38.156 "state": "FREE",
00:33:38.156 "utilization": 0.0
00:33:38.156 }
00:33:38.156 ],
00:33:38.156 "read-only": true
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "name": "verbose_mode",
00:33:38.156 "value": true,
00:33:38.156 "unit": "",
00:33:38.156 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:33:38.156 },
00:33:38.156 {
00:33:38.156 "name": "prep_upgrade_on_shutdown",
00:33:38.156 "value": false,
00:33:38.156 "unit": "",
00:33:38.156 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:33:38.156 }
00:33:38.156 ]
00:33:38.156 }
00:33:38.156 12:37:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
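Compared with the property dump taken before the restart at the top of this run, prep_upgrade_on_shutdown now reads false and the first three bands are CLOSED with the validity that was persisted during the prepared shutdown. Individual fields can be pulled out of this JSON in the same style as the script's own jq queries; a hypothetical one-liner (the .value filter is illustrative, not from the test):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
	| jq -r '.properties[] | select(.name == "prep_upgrade_on_shutdown") | .value'
# prints: false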
00:33:38.156 12:37:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:33:38.156 12:37:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:33:38.418 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:33:38.418 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:33:38.418 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:33:38.418 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:33:38.418 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:33:38.680 Validate MD5 checksum, iteration 1
00:33:38.680 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:33:38.680 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:33:38.680 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:33:38.680 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:33:38.680 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:33:38.680 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:33:38.680 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:33:38.680 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:33:38.680 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:33:38.680 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:33:38.680 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:33:38.680 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:33:38.680 12:37:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:33:38.680 [2024-12-05 12:37:09.426266] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
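With both guards satisfied (used=0 cache chunks in use, opened=0 bands left OPENED), test_validate_checksum reads the ftln1 namespace back through the NVMe/TCP initiator and compares MD5 sums against what was written before the restart. A sketch of the loop behind the @96-@105 records, reconstructed from the xtrace; the iterations, checksums and testdir names are assumed from context, not verbatim:

test_validate_checksum() {
	local skip=0
	for ((i = 0; i < iterations; i++)); do
		echo "Validate MD5 checksum, iteration $((i + 1))"
		# @99: read 1024 x 1 MiB blocks from ftln1 over NVMe/TCP via spdk_dd
		tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
		skip=$((skip + 1024))                            # @100
		sum=$(md5sum "$testdir/file" | cut -f1 -d' ')    # @102-@103
		# @105: the data read back must match the pre-restart checksum
		[[ $sum != "${checksums[i]}" ]] && return 1
	done
}

Both iterations pass below: e291f2a5866140c8f49d65676dc85f11 at skip=0 and b8134c0647c01a94499f2f8a230241b9 at skip=1024.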
00:33:38.680 [2024-12-05 12:37:09.426416] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84910 ] 00:33:38.942 [2024-12-05 12:37:09.589644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.942 [2024-12-05 12:37:09.712197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.860  [2024-12-05T12:37:12.302Z] Copying: 520/1024 [MB] (520 MBps) [2024-12-05T12:37:13.716Z] Copying: 1024/1024 [MB] (average 516 MBps) 00:33:42.847 00:33:42.847 12:37:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:42.847 12:37:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:44.762 12:37:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:44.762 Validate MD5 checksum, iteration 2 00:33:44.762 12:37:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e291f2a5866140c8f49d65676dc85f11 00:33:44.762 12:37:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e291f2a5866140c8f49d65676dc85f11 != \e\2\9\1\f\2\a\5\8\6\6\1\4\0\c\8\f\4\9\d\6\5\6\7\6\d\c\8\5\f\1\1 ]] 00:33:44.762 12:37:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:44.762 12:37:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:44.762 12:37:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:44.762 12:37:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:44.762 12:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:44.762 12:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:44.762 12:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:44.762 12:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:44.762 12:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:44.762 [2024-12-05 12:37:15.491975] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:33:44.762 [2024-12-05 12:37:15.492065] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84977 ] 00:33:45.023 [2024-12-05 12:37:15.639320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.023 [2024-12-05 12:37:15.716156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.406  [2024-12-05T12:37:17.845Z] Copying: 620/1024 [MB] (620 MBps) [2024-12-05T12:37:21.152Z] Copying: 1024/1024 [MB] (average 627 MBps) 00:33:50.283 00:33:50.283 12:37:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:50.283 12:37:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b8134c0647c01a94499f2f8a230241b9 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b8134c0647c01a94499f2f8a230241b9 != \b\8\1\3\4\c\0\6\4\7\c\0\1\a\9\4\4\9\9\f\2\f\8\a\2\3\0\2\4\1\b\9 ]] 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84815 ]] 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84815 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85051 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85051 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85051 ']' 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:51.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:51.668 12:37:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:51.668 [2024-12-05 12:37:22.326768] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:33:51.668 [2024-12-05 12:37:22.326886] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85051 ] 00:33:51.668 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84815 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:33:51.668 [2024-12-05 12:37:22.484250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.929 [2024-12-05 12:37:22.610905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:52.871 [2024-12-05 12:37:23.477150] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:52.871 [2024-12-05 12:37:23.477239] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:52.871 [2024-12-05 12:37:23.625812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.871 [2024-12-05 12:37:23.625876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:52.871 [2024-12-05 12:37:23.625891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:52.871 [2024-12-05 12:37:23.625900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.871 [2024-12-05 12:37:23.625958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.871 [2024-12-05 12:37:23.625968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:52.871 [2024-12-05 12:37:23.625977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:33:52.871 [2024-12-05 12:37:23.625985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.871 [2024-12-05 12:37:23.626011] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:52.871 [2024-12-05 12:37:23.626738] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:52.871 [2024-12-05 12:37:23.626756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.871 [2024-12-05 12:37:23.626765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:52.871 [2024-12-05 12:37:23.626774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.754 ms 00:33:52.871 [2024-12-05 12:37:23.626783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.871 [2024-12-05 12:37:23.627096] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:52.871 [2024-12-05 12:37:23.643743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.871 [2024-12-05 12:37:23.643905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:52.871 [2024-12-05 12:37:23.643925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.646 ms 00:33:52.871 [2024-12-05 12:37:23.643934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.871 [2024-12-05 12:37:23.653316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:33:52.871 [2024-12-05 12:37:23.653369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:52.871 [2024-12-05 12:37:23.653380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:33:52.871 [2024-12-05 12:37:23.653387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.871 [2024-12-05 12:37:23.653736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.871 [2024-12-05 12:37:23.653755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:52.871 [2024-12-05 12:37:23.653765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.248 ms 00:33:52.871 [2024-12-05 12:37:23.653772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.871 [2024-12-05 12:37:23.653826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.871 [2024-12-05 12:37:23.653842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:52.871 [2024-12-05 12:37:23.653850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:33:52.871 [2024-12-05 12:37:23.653858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.871 [2024-12-05 12:37:23.653884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.871 [2024-12-05 12:37:23.653893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:52.871 [2024-12-05 12:37:23.653901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:52.871 [2024-12-05 12:37:23.653909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.871 [2024-12-05 12:37:23.653930] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:52.871 [2024-12-05 12:37:23.656972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.871 [2024-12-05 12:37:23.657000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:52.871 [2024-12-05 12:37:23.657009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.047 ms 00:33:52.871 [2024-12-05 12:37:23.657017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.871 [2024-12-05 12:37:23.657049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.871 [2024-12-05 12:37:23.657058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:52.871 [2024-12-05 12:37:23.657067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:52.871 [2024-12-05 12:37:23.657074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.871 [2024-12-05 12:37:23.657096] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:52.871 [2024-12-05 12:37:23.657116] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:52.871 [2024-12-05 12:37:23.657151] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:52.871 [2024-12-05 12:37:23.657169] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:52.871 [2024-12-05 12:37:23.657275] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:52.871 [2024-12-05 12:37:23.657286] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:52.871 [2024-12-05 12:37:23.657297] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:52.871 [2024-12-05 12:37:23.657307] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:52.871 [2024-12-05 12:37:23.657317] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:52.871 [2024-12-05 12:37:23.657325] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:52.871 [2024-12-05 12:37:23.657332] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:52.871 [2024-12-05 12:37:23.657340] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:52.871 [2024-12-05 12:37:23.657348] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:52.871 [2024-12-05 12:37:23.657358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.871 [2024-12-05 12:37:23.657365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:52.871 [2024-12-05 12:37:23.657373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.265 ms 00:33:52.871 [2024-12-05 12:37:23.657380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.871 [2024-12-05 12:37:23.657483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.871 [2024-12-05 12:37:23.657492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:52.871 [2024-12-05 12:37:23.657500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.087 ms 00:33:52.871 [2024-12-05 12:37:23.657507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.871 [2024-12-05 12:37:23.657621] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:52.871 [2024-12-05 12:37:23.657634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:52.871 [2024-12-05 12:37:23.657643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:52.871 [2024-12-05 12:37:23.657652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:52.871 [2024-12-05 12:37:23.657659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:52.871 [2024-12-05 12:37:23.657666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:52.871 [2024-12-05 12:37:23.657673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:52.871 [2024-12-05 12:37:23.657680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:52.871 [2024-12-05 12:37:23.657688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:52.871 [2024-12-05 12:37:23.657694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:52.871 [2024-12-05 12:37:23.657702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:52.871 [2024-12-05 12:37:23.657709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:52.871 [2024-12-05 12:37:23.657716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:52.872 [2024-12-05 12:37:23.657722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:52.872 [2024-12-05 12:37:23.657729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
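The superblock loads with 'SHM: clean 0, shm_clean 0', i.e. a dirty shutdown, and FTL rebuilds its layout from there. The dump is internally consistent; as a quick worked check (assuming the region is rounded up to the layout's block alignment), the l2p region size follows from the reported entry count and address size:

# 3774873 L2P entries x 4 bytes per entry (the reported L2P address size):
echo $((3774873 * 4))    # 15099492 bytes
# 15099492 / 1048576 = ~14.40 MiB, in line with the 14.50 MiB the dump
# reports for the l2p region once alignment padding is included.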
00:33:52.872 [2024-12-05 12:37:23.657735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:52.872 [2024-12-05 12:37:23.657742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:52.872 [2024-12-05 12:37:23.657749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:52.872 [2024-12-05 12:37:23.657756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:52.872 [2024-12-05 12:37:23.657764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:52.872 [2024-12-05 12:37:23.657771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:52.872 [2024-12-05 12:37:23.657783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:52.872 [2024-12-05 12:37:23.657790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:52.872 [2024-12-05 12:37:23.657797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:52.872 [2024-12-05 12:37:23.657803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:52.872 [2024-12-05 12:37:23.657810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:52.872 [2024-12-05 12:37:23.657817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:52.872 [2024-12-05 12:37:23.657823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:52.872 [2024-12-05 12:37:23.657830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:52.872 [2024-12-05 12:37:23.657836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:52.872 [2024-12-05 12:37:23.657843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:52.872 [2024-12-05 12:37:23.657850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:52.872 [2024-12-05 12:37:23.657857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:52.872 [2024-12-05 12:37:23.657863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:52.872 [2024-12-05 12:37:23.657869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:52.872 [2024-12-05 12:37:23.657876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:52.872 [2024-12-05 12:37:23.657882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:52.872 [2024-12-05 12:37:23.657888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:52.872 [2024-12-05 12:37:23.657895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:52.872 [2024-12-05 12:37:23.657901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:52.872 [2024-12-05 12:37:23.657908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:52.872 [2024-12-05 12:37:23.657914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:52.872 [2024-12-05 12:37:23.657921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:52.872 [2024-12-05 12:37:23.657927] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:52.872 [2024-12-05 12:37:23.657936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:52.872 [2024-12-05 12:37:23.657943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:52.872 [2024-12-05 12:37:23.657950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:33:52.872 [2024-12-05 12:37:23.657958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:52.872 [2024-12-05 12:37:23.657965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:52.872 [2024-12-05 12:37:23.657972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:52.872 [2024-12-05 12:37:23.657980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:52.872 [2024-12-05 12:37:23.657989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:52.872 [2024-12-05 12:37:23.657996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:52.872 [2024-12-05 12:37:23.658004] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:52.872 [2024-12-05 12:37:23.658014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:52.872 [2024-12-05 12:37:23.658022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:52.872 [2024-12-05 12:37:23.658030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:52.872 [2024-12-05 12:37:23.658037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:52.872 [2024-12-05 12:37:23.658044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:52.872 [2024-12-05 12:37:23.658051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:52.872 [2024-12-05 12:37:23.658058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:52.872 [2024-12-05 12:37:23.658065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:52.872 [2024-12-05 12:37:23.658073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:52.872 [2024-12-05 12:37:23.658080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:52.872 [2024-12-05 12:37:23.658087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:52.872 [2024-12-05 12:37:23.658094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:52.872 [2024-12-05 12:37:23.658101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:52.872 [2024-12-05 12:37:23.658108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:52.872 [2024-12-05 12:37:23.658115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:52.872 [2024-12-05 12:37:23.658122] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:33:52.872 [2024-12-05 12:37:23.658131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:52.872 [2024-12-05 12:37:23.658141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:52.872 [2024-12-05 12:37:23.658150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:52.872 [2024-12-05 12:37:23.658157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:52.872 [2024-12-05 12:37:23.658165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:52.872 [2024-12-05 12:37:23.658172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.872 [2024-12-05 12:37:23.658179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:52.872 [2024-12-05 12:37:23.658187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.620 ms 00:33:52.872 [2024-12-05 12:37:23.658194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.872 [2024-12-05 12:37:23.684809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.872 [2024-12-05 12:37:23.684851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:52.872 [2024-12-05 12:37:23.684863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.565 ms 00:33:52.872 [2024-12-05 12:37:23.684871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.872 [2024-12-05 12:37:23.684921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.872 [2024-12-05 12:37:23.684930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:52.872 [2024-12-05 12:37:23.684938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:33:52.872 [2024-12-05 12:37:23.684946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.872 [2024-12-05 12:37:23.717919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.872 [2024-12-05 12:37:23.717965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:52.872 [2024-12-05 12:37:23.717979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.911 ms 00:33:52.872 [2024-12-05 12:37:23.717987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.872 [2024-12-05 12:37:23.718036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.872 [2024-12-05 12:37:23.718045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:52.872 [2024-12-05 12:37:23.718054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:52.872 [2024-12-05 12:37:23.718065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.873 [2024-12-05 12:37:23.718183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.873 [2024-12-05 12:37:23.718194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:52.873 [2024-12-05 12:37:23.718203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:33:52.873 [2024-12-05 12:37:23.718211] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:52.873 [2024-12-05 12:37:23.718256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.873 [2024-12-05 12:37:23.718265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:52.873 [2024-12-05 12:37:23.718273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:33:52.873 [2024-12-05 12:37:23.718281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.873 [2024-12-05 12:37:23.733923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.873 [2024-12-05 12:37:23.734083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:52.873 [2024-12-05 12:37:23.734100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.618 ms 00:33:52.873 [2024-12-05 12:37:23.734114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:52.873 [2024-12-05 12:37:23.734240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:52.873 [2024-12-05 12:37:23.734252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:33:52.873 [2024-12-05 12:37:23.734260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:52.873 [2024-12-05 12:37:23.734268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.135 [2024-12-05 12:37:23.772488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.135 [2024-12-05 12:37:23.772538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:33:53.135 [2024-12-05 12:37:23.772553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.199 ms 00:33:53.135 [2024-12-05 12:37:23.772562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.135 [2024-12-05 12:37:23.782075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.135 [2024-12-05 12:37:23.782206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:53.135 [2024-12-05 12:37:23.782235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.535 ms 00:33:53.135 [2024-12-05 12:37:23.782244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.135 [2024-12-05 12:37:23.840934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.135 [2024-12-05 12:37:23.841134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:53.135 [2024-12-05 12:37:23.841155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 58.623 ms 00:33:53.135 [2024-12-05 12:37:23.841164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.135 [2024-12-05 12:37:23.841541] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:33:53.135 [2024-12-05 12:37:23.841674] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:33:53.135 [2024-12-05 12:37:23.841784] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:33:53.135 [2024-12-05 12:37:23.841894] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:33:53.135 [2024-12-05 12:37:23.841912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.135 [2024-12-05 12:37:23.841922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:33:53.135 [2024-12-05 
12:37:23.841933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.502 ms 00:33:53.135 [2024-12-05 12:37:23.841942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.135 [2024-12-05 12:37:23.842048] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:33:53.135 [2024-12-05 12:37:23.842062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.135 [2024-12-05 12:37:23.842072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:33:53.135 [2024-12-05 12:37:23.842081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:33:53.135 [2024-12-05 12:37:23.842089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.135 [2024-12-05 12:37:23.857568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.135 [2024-12-05 12:37:23.857613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:33:53.135 [2024-12-05 12:37:23.857625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.457 ms 00:33:53.135 [2024-12-05 12:37:23.857634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.135 [2024-12-05 12:37:23.866246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.135 [2024-12-05 12:37:23.866279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:33:53.135 [2024-12-05 12:37:23.866289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:33:53.135 [2024-12-05 12:37:23.866297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.135 [2024-12-05 12:37:23.866392] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:33:53.135 [2024-12-05 12:37:23.866579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.135 [2024-12-05 12:37:23.866593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:53.135 [2024-12-05 12:37:23.866602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:33:53.135 [2024-12-05 12:37:23.866610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.704 [2024-12-05 12:37:24.493935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.705 [2024-12-05 12:37:24.494034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:53.705 [2024-12-05 12:37:24.494052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 626.525 ms 00:33:53.705 [2024-12-05 12:37:24.494062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.705 [2024-12-05 12:37:24.498734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.705 [2024-12-05 12:37:24.498775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:53.705 [2024-12-05 12:37:24.498786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.495 ms 00:33:53.705 [2024-12-05 12:37:24.498796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.705 [2024-12-05 12:37:24.499799] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:33:53.705 [2024-12-05 12:37:24.499872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.705 [2024-12-05 12:37:24.499884] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:53.705 [2024-12-05 12:37:24.499894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.041 ms 00:33:53.705 [2024-12-05 12:37:24.499903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.705 [2024-12-05 12:37:24.499937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.705 [2024-12-05 12:37:24.499947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:53.705 [2024-12-05 12:37:24.499956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:53.705 [2024-12-05 12:37:24.499970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:53.705 [2024-12-05 12:37:24.500007] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 633.611 ms, result 0 00:33:53.705 [2024-12-05 12:37:24.500047] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:33:53.705 [2024-12-05 12:37:24.500241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:53.705 [2024-12-05 12:37:24.500252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:53.705 [2024-12-05 12:37:24.500261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:33:53.705 [2024-12-05 12:37:24.500269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.274 [2024-12-05 12:37:25.031158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.274 [2024-12-05 12:37:25.031370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:54.274 [2024-12-05 12:37:25.031409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 529.849 ms 00:33:54.274 [2024-12-05 12:37:25.031418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.274 [2024-12-05 12:37:25.035809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.274 [2024-12-05 12:37:25.035845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:54.274 [2024-12-05 12:37:25.035856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.148 ms 00:33:54.274 [2024-12-05 12:37:25.035864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.274 [2024-12-05 12:37:25.036260] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:33:54.274 [2024-12-05 12:37:25.036313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.274 [2024-12-05 12:37:25.036322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:54.274 [2024-12-05 12:37:25.036331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.422 ms 00:33:54.274 [2024-12-05 12:37:25.036339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.274 [2024-12-05 12:37:25.036367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.274 [2024-12-05 12:37:25.036377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:54.274 [2024-12-05 12:37:25.036386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:54.274 [2024-12-05 12:37:25.036394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.274 [2024-12-05 
12:37:25.036431] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 536.378 ms, result 0 00:33:54.274 [2024-12-05 12:37:25.036491] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:54.274 [2024-12-05 12:37:25.036502] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:54.274 [2024-12-05 12:37:25.036513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.274 [2024-12-05 12:37:25.036522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:33:54.274 [2024-12-05 12:37:25.036530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1170.137 ms 00:33:54.274 [2024-12-05 12:37:25.036537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.274 [2024-12-05 12:37:25.036567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.274 [2024-12-05 12:37:25.036580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:33:54.274 [2024-12-05 12:37:25.036588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:54.274 [2024-12-05 12:37:25.036605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.274 [2024-12-05 12:37:25.047901] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:54.274 [2024-12-05 12:37:25.048020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.274 [2024-12-05 12:37:25.048031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:54.274 [2024-12-05 12:37:25.048042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.399 ms 00:33:54.274 [2024-12-05 12:37:25.048051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.274 [2024-12-05 12:37:25.048801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.274 [2024-12-05 12:37:25.048914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:33:54.274 [2024-12-05 12:37:25.048933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.660 ms 00:33:54.274 [2024-12-05 12:37:25.048942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.274 [2024-12-05 12:37:25.051168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.274 [2024-12-05 12:37:25.051191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:33:54.274 [2024-12-05 12:37:25.051201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.205 ms 00:33:54.274 [2024-12-05 12:37:25.051210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.274 [2024-12-05 12:37:25.051252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.274 [2024-12-05 12:37:25.051262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:33:54.274 [2024-12-05 12:37:25.051270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:54.274 [2024-12-05 12:37:25.051282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.274 [2024-12-05 12:37:25.051393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.274 [2024-12-05 12:37:25.051403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:54.274 
[2024-12-05 12:37:25.051412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:33:54.274 [2024-12-05 12:37:25.051419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.274 [2024-12-05 12:37:25.051441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.274 [2024-12-05 12:37:25.051449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:54.275 [2024-12-05 12:37:25.051457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:54.275 [2024-12-05 12:37:25.051480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.275 [2024-12-05 12:37:25.051514] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:54.275 [2024-12-05 12:37:25.051524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.275 [2024-12-05 12:37:25.051532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:54.275 [2024-12-05 12:37:25.051540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:54.275 [2024-12-05 12:37:25.051547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.275 [2024-12-05 12:37:25.051601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.275 [2024-12-05 12:37:25.051610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:54.275 [2024-12-05 12:37:25.051619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:33:54.275 [2024-12-05 12:37:25.051626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.275 [2024-12-05 12:37:25.052703] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1426.380 ms, result 0 00:33:54.275 [2024-12-05 12:37:25.068367] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.275 [2024-12-05 12:37:25.084368] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:54.275 [2024-12-05 12:37:25.093177] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:54.275 Validate MD5 checksum, iteration 1 00:33:54.275 12:37:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:54.275 12:37:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:54.275 12:37:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:54.275 12:37:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:54.275 12:37:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:33:54.275 12:37:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:54.275 12:37:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:54.275 12:37:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:54.275 12:37:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:54.275 12:37:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:54.275 12:37:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:54.275 12:37:25 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:54.275 12:37:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:54.275 12:37:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:54.275 12:37:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:54.535 [2024-12-05 12:37:25.195193] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:33:54.535 [2024-12-05 12:37:25.195449] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85091 ] 00:33:54.535 [2024-12-05 12:37:25.351376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.795 [2024-12-05 12:37:25.432393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:56.186  [2024-12-05T12:37:27.627Z] Copying: 615/1024 [MB] (615 MBps) [2024-12-05T12:37:28.569Z] Copying: 1024/1024 [MB] (average 614 MBps) 00:33:57.700 00:33:57.700 12:37:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:57.700 12:37:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:00.246 12:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:00.246 12:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e291f2a5866140c8f49d65676dc85f11 00:34:00.246 12:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e291f2a5866140c8f49d65676dc85f11 != \e\2\9\1\f\2\a\5\8\6\6\1\4\0\c\8\f\4\9\d\6\5\6\7\6\d\c\8\5\f\1\1 ]] 00:34:00.246 12:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:00.246 12:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:00.246 12:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:34:00.246 Validate MD5 checksum, iteration 2 00:34:00.246 12:37:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:00.246 12:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:00.246 12:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:00.246 12:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:00.246 12:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:00.246 12:37:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:00.246 [2024-12-05 12:37:30.744393] Starting SPDK v25.01-pre git sha1 
85bc1e85a / DPDK 24.03.0 initialization... 00:34:00.246 [2024-12-05 12:37:30.744800] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85147 ] 00:34:00.246 [2024-12-05 12:37:30.907072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.246 [2024-12-05 12:37:31.040808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.185  [2024-12-05T12:37:33.675Z] Copying: 516/1024 [MB] (516 MBps) [2024-12-05T12:37:33.676Z] Copying: 1002/1024 [MB] (486 MBps) [2024-12-05T12:37:35.062Z] Copying: 1024/1024 [MB] (average 502 MBps) 00:34:04.193 00:34:04.193 12:37:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:34:04.193 12:37:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:06.103 12:37:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:06.103 12:37:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b8134c0647c01a94499f2f8a230241b9 00:34:06.103 12:37:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b8134c0647c01a94499f2f8a230241b9 != \b\8\1\3\4\c\0\6\4\7\c\0\1\a\9\4\4\9\9\f\2\f\8\a\2\3\0\2\4\1\b\9 ]] 00:34:06.103 12:37:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:06.103 12:37:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:06.103 12:37:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:34:06.103 12:37:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:34:06.103 12:37:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:34:06.103 12:37:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85051 ]] 00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85051 00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85051 ']' 00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85051 00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85051 00:34:06.361 killing process with pid 85051 00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85051' 
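Both post-recovery checksums match their pre-shutdown values, so the dirty recovery preserved the data, and teardown begins. Unlike the kill -9 earlier, killprocess (test/common/autotest_common.sh, traced around this point) stops the target gracefully; a sketch with the sudo branch elided, so details may differ:

killprocess() {
    local pid=$1
    kill -0 $pid                                  # fail fast if already gone
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= $pid)
    fi
    echo "killing process with pid $pid"
    kill $pid                                     # default SIGTERM: FTL gets to
                                                  # run its full shutdown path
    wait $pid                                     # reap and propagate the status
}

The graceful path is what produces the long 'FTL shutdown' trace that follows, ending with 'Set FTL clean state'.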
00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 85051 00:34:06.361 12:37:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85051 00:34:06.927 [2024-12-05 12:37:37.679276] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:34:06.927 [2024-12-05 12:37:37.691815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.927 [2024-12-05 12:37:37.691855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:34:06.927 [2024-12-05 12:37:37.691868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:06.927 [2024-12-05 12:37:37.691875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.927 [2024-12-05 12:37:37.691895] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:34:06.927 [2024-12-05 12:37:37.694265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.927 [2024-12-05 12:37:37.694297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:34:06.927 [2024-12-05 12:37:37.694306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.358 ms 00:34:06.927 [2024-12-05 12:37:37.694313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.927 [2024-12-05 12:37:37.694513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.927 [2024-12-05 12:37:37.694524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:34:06.927 [2024-12-05 12:37:37.694531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.182 ms 00:34:06.927 [2024-12-05 12:37:37.694539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.927 [2024-12-05 12:37:37.695869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.927 [2024-12-05 12:37:37.695895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:34:06.927 [2024-12-05 12:37:37.695903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.318 ms 00:34:06.927 [2024-12-05 12:37:37.695913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.927 [2024-12-05 12:37:37.696829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.927 [2024-12-05 12:37:37.696849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:34:06.927 [2024-12-05 12:37:37.696857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.891 ms 00:34:06.927 [2024-12-05 12:37:37.696863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.927 [2024-12-05 12:37:37.704704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.927 [2024-12-05 12:37:37.704731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:34:06.927 [2024-12-05 12:37:37.704743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.814 ms 00:34:06.927 [2024-12-05 12:37:37.704750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.927 [2024-12-05 12:37:37.708998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.927 [2024-12-05 12:37:37.709025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:34:06.927 [2024-12-05 12:37:37.709033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.220 ms 00:34:06.927 [2024-12-05 
12:37:37.709041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.927 [2024-12-05 12:37:37.709115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.927 [2024-12-05 12:37:37.709124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:34:06.927 [2024-12-05 12:37:37.709132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:34:06.927 [2024-12-05 12:37:37.709142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.927 [2024-12-05 12:37:37.716565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.927 [2024-12-05 12:37:37.716598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:34:06.927 [2024-12-05 12:37:37.716605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.410 ms 00:34:06.927 [2024-12-05 12:37:37.716611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.927 [2024-12-05 12:37:37.724113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.927 [2024-12-05 12:37:37.724139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:34:06.927 [2024-12-05 12:37:37.724146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.477 ms 00:34:06.927 [2024-12-05 12:37:37.724152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.927 [2024-12-05 12:37:37.731300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.927 [2024-12-05 12:37:37.731414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:34:06.927 [2024-12-05 12:37:37.731426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.123 ms 00:34:06.927 [2024-12-05 12:37:37.731433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.927 [2024-12-05 12:37:37.738662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.927 [2024-12-05 12:37:37.738687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:34:06.927 [2024-12-05 12:37:37.738695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.161 ms 00:34:06.927 [2024-12-05 12:37:37.738701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.927 [2024-12-05 12:37:37.738727] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:34:06.927 [2024-12-05 12:37:37.738740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:06.927 [2024-12-05 12:37:37.738748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:34:06.927 [2024-12-05 12:37:37.738756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:34:06.927 [2024-12-05 12:37:37.738763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:06.927 [2024-12-05 12:37:37.738772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:06.927 [2024-12-05 12:37:37.738778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:06.928 [2024-12-05 12:37:37.738786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:06.928 [2024-12-05 12:37:37.738792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 
261120 wr_cnt: 0 state: free 00:34:06.928 [2024-12-05 12:37:37.738798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:06.928 [2024-12-05 12:37:37.738804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:06.928 [2024-12-05 12:37:37.738810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:06.928 [2024-12-05 12:37:37.738816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:06.928 [2024-12-05 12:37:37.738822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:06.928 [2024-12-05 12:37:37.738828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:06.928 [2024-12-05 12:37:37.738834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:06.928 [2024-12-05 12:37:37.738840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:06.928 [2024-12-05 12:37:37.738846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:06.928 [2024-12-05 12:37:37.738852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:06.928 [2024-12-05 12:37:37.738859] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:34:06.928 [2024-12-05 12:37:37.738865] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c41d8d62-bbc6-4b39-9d35-fd5f270876d3 00:34:06.928 [2024-12-05 12:37:37.738872] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:34:06.928 [2024-12-05 12:37:37.738877] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:34:06.928 [2024-12-05 12:37:37.738883] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:34:06.928 [2024-12-05 12:37:37.738890] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:34:06.928 [2024-12-05 12:37:37.738896] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:34:06.928 [2024-12-05 12:37:37.738902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:34:06.928 [2024-12-05 12:37:37.738911] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:34:06.928 [2024-12-05 12:37:37.738916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:34:06.928 [2024-12-05 12:37:37.738922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:34:06.928 [2024-12-05 12:37:37.738928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.928 [2024-12-05 12:37:37.738936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:34:06.928 [2024-12-05 12:37:37.738944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:34:06.928 [2024-12-05 12:37:37.738950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.928 [2024-12-05 12:37:37.749271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.928 [2024-12-05 12:37:37.749293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:34:06.928 [2024-12-05 12:37:37.749303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.308 ms 00:34:06.928 [2024-12-05 12:37:37.749310] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.928 [2024-12-05 12:37:37.749632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.928 [2024-12-05 12:37:37.749639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:34:06.928 [2024-12-05 12:37:37.749646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.302 ms 00:34:06.928 [2024-12-05 12:37:37.749652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.928 [2024-12-05 12:37:37.785261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:06.928 [2024-12-05 12:37:37.785367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:06.928 [2024-12-05 12:37:37.785414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:06.928 [2024-12-05 12:37:37.785437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.928 [2024-12-05 12:37:37.785489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:06.928 [2024-12-05 12:37:37.785508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:06.928 [2024-12-05 12:37:37.785524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:06.928 [2024-12-05 12:37:37.785539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.928 [2024-12-05 12:37:37.785627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:06.928 [2024-12-05 12:37:37.785648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:06.928 [2024-12-05 12:37:37.785666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:06.928 [2024-12-05 12:37:37.785721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.928 [2024-12-05 12:37:37.785753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:06.928 [2024-12-05 12:37:37.785770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:06.928 [2024-12-05 12:37:37.785786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:06.928 [2024-12-05 12:37:37.785801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:07.186 [2024-12-05 12:37:37.851277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:07.186 [2024-12-05 12:37:37.851430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:07.186 [2024-12-05 12:37:37.851486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:07.186 [2024-12-05 12:37:37.851506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:07.186 [2024-12-05 12:37:37.903555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:07.186 [2024-12-05 12:37:37.903699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:07.187 [2024-12-05 12:37:37.903791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:07.187 [2024-12-05 12:37:37.903810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:07.187 [2024-12-05 12:37:37.903900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:07.187 [2024-12-05 12:37:37.903920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:07.187 [2024-12-05 12:37:37.903937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
00:34:07.187 [2024-12-05 12:37:37.903952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:07.187 [2024-12-05 12:37:37.904016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:07.187 [2024-12-05 12:37:37.904262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:34:07.187 [2024-12-05 12:37:37.904281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:07.187 [2024-12-05 12:37:37.904296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:07.187 [2024-12-05 12:37:37.904397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:07.187 [2024-12-05 12:37:37.904451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:34:07.187 [2024-12-05 12:37:37.904481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:07.187 [2024-12-05 12:37:37.904498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:07.187 [2024-12-05 12:37:37.904573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:07.187 [2024-12-05 12:37:37.904604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:34:07.187 [2024-12-05 12:37:37.904623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:07.187 [2024-12-05 12:37:37.904638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:07.187 [2024-12-05 12:37:37.904681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:07.187 [2024-12-05 12:37:37.904700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:34:07.187 [2024-12-05 12:37:37.904715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:07.187 [2024-12-05 12:37:37.904730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:07.187 [2024-12-05 12:37:37.904779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:07.187 [2024-12-05 12:37:37.904801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:34:07.187 [2024-12-05 12:37:37.904817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:07.187 [2024-12-05 12:37:37.904832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:07.187 [2024-12-05 12:37:37.904951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 213.106 ms, result 0
00:34:07.755 12:37:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:34:07.755 12:37:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:34:07.755 12:37:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:34:07.755 12:37:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:34:07.755 12:37:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:34:07.755 12:37:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:34:07.755 Remove shared memory files
12:37:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
12:37:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
12:37:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
12:37:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:34:07.755 12:37:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84815
12:37:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:08.015 12:37:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:34:08.015 ************************************
00:34:08.015 END TEST ftl_upgrade_shutdown
00:34:08.015 ************************************
00:34:08.015
00:34:08.015 real 1m25.786s
00:34:08.015 user 1m56.010s
00:34:08.015 sys 0m20.919s
00:34:08.015 12:37:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:08.015 12:37:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:34:08.015 Process with pid 75335 is not found
12:37:38 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
12:37:38 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
12:37:38 ftl -- ftl/ftl.sh@14 -- # killprocess 75335
12:37:38 ftl -- common/autotest_common.sh@954 -- # '[' -z 75335 ']'
12:37:38 ftl -- common/autotest_common.sh@958 -- # kill -0 75335
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75335) - No such process
12:37:38 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75335 is not found'
12:37:38 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
12:37:38 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85266
12:37:38 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85266
00:34:08.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:37:38 ftl -- common/autotest_common.sh@835 -- # '[' -z 85266 ']'
12:37:38 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
12:37:38 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
12:37:38 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
12:37:38 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
12:37:38 ftl -- common/autotest_common.sh@10 -- # set +x
12:37:38 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:34:08.015 [2024-12-05 12:37:38.747896] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:34:08.015 [2024-12-05 12:37:38.748003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85266 ]
00:34:08.275 [2024-12-05 12:37:38.906001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:08.275 [2024-12-05 12:37:39.026885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:09.265 12:37:39 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:09.266 12:37:39 ftl -- common/autotest_common.sh@868 -- # return 0
00:34:09.266 12:37:39 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:34:09.266 nvme0n1
00:34:09.266 12:37:40 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:34:09.266 12:37:40 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:34:09.266 12:37:40 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:34:09.526 12:37:40 ftl -- ftl/common.sh@28 -- # stores=7733dfa2-0af4-45dd-b3e0-adf154e8cea2
00:34:09.526 12:37:40 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:34:09.526 12:37:40 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7733dfa2-0af4-45dd-b3e0-adf154e8cea2
00:34:09.787 12:37:40 ftl -- ftl/ftl.sh@23 -- # killprocess 85266
00:34:09.787 12:37:40 ftl -- common/autotest_common.sh@954 -- # '[' -z 85266 ']'
00:34:09.787 12:37:40 ftl -- common/autotest_common.sh@958 -- # kill -0 85266
00:34:09.787 12:37:40 ftl -- common/autotest_common.sh@959 -- # uname
00:34:09.787 12:37:40 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:09.787 12:37:40 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85266
00:34:09.787 killing process with pid 85266
12:37:40 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
12:37:40 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
12:37:40 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85266'
12:37:40 ftl -- common/autotest_common.sh@973 -- # kill 85266
12:37:40 ftl -- common/autotest_common.sh@978 -- # wait 85266
00:34:11.699 12:37:42 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:34:11.961 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:11.961 Waiting for block devices as requested
00:34:11.961 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:34:11.961 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:34:12.222 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:34:12.222 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:34:17.499 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:34:17.499 Remove shared memory files
12:37:48 ftl -- ftl/ftl.sh@28 -- # remove_shm
12:37:48 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
12:37:48 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:34:17.499 12:37:48 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:34:17.499 12:37:48 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:34:17.499 12:37:48 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:17.499 12:37:48 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:34:17.499 ************************************
00:34:17.499 END TEST ftl
00:34:17.499 ************************************
00:34:17.499
00:34:17.499 real 14m54.477s
00:34:17.499 user 17m17.208s
00:34:17.499 sys 1m23.532s
00:34:17.499 12:37:48 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:17.499 12:37:48 ftl -- common/autotest_common.sh@10 -- # set +x
00:34:17.499 12:37:48 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:34:17.499 12:37:48 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:34:17.499 12:37:48 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:34:17.499 12:37:48 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:34:17.499 12:37:48 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:34:17.499 12:37:48 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:34:17.499 12:37:48 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:34:17.499 12:37:48 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:34:17.499 12:37:48 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:34:17.499 12:37:48 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:34:17.499 12:37:48 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:17.499 12:37:48 -- common/autotest_common.sh@10 -- # set +x
00:34:17.499 12:37:48 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:34:17.499 12:37:48 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:34:17.499 12:37:48 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:34:17.499 12:37:48 -- common/autotest_common.sh@10 -- # set +x
00:34:18.889 INFO: APP EXITING
00:34:18.889 INFO: killing all VMs
00:34:18.889 INFO: killing vhost app
00:34:18.889 INFO: EXIT DONE
00:34:19.149 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:19.410 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:34:19.410 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:34:19.410 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:34:19.410 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:34:19.982 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:20.242 Cleaning
00:34:20.243 Removing: /var/run/dpdk/spdk0/config
00:34:20.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:34:20.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:34:20.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:34:20.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:34:20.243 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:34:20.243 Removing: /var/run/dpdk/spdk0/hugepage_info
00:34:20.243 Removing: /var/run/dpdk/spdk0
00:34:20.243 Removing: /var/run/dpdk/spdk_pid57070
00:34:20.243 Removing: /var/run/dpdk/spdk_pid57278
00:34:20.243 Removing: /var/run/dpdk/spdk_pid57496
00:34:20.243 Removing: /var/run/dpdk/spdk_pid57589
00:34:20.243 Removing: /var/run/dpdk/spdk_pid57634
00:34:20.243 Removing: /var/run/dpdk/spdk_pid57751
00:34:20.243 Removing: /var/run/dpdk/spdk_pid57769
00:34:20.243 Removing: /var/run/dpdk/spdk_pid57968
00:34:20.243 Removing: /var/run/dpdk/spdk_pid58061
00:34:20.243 Removing: /var/run/dpdk/spdk_pid58157
00:34:20.243 Removing: /var/run/dpdk/spdk_pid58268
00:34:20.243 Removing: /var/run/dpdk/spdk_pid58365
00:34:20.243 Removing: /var/run/dpdk/spdk_pid58410
00:34:20.243 Removing: /var/run/dpdk/spdk_pid58441
00:34:20.243 Removing: /var/run/dpdk/spdk_pid58517
00:34:20.243 Removing: /var/run/dpdk/spdk_pid58601
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59026
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59090
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59142
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59158
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59260
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59271
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59373
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59389
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59447
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59465
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59518
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59536
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59695
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59733
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59816
00:34:20.243 Removing: /var/run/dpdk/spdk_pid59994
00:34:20.243 Removing: /var/run/dpdk/spdk_pid60078
00:34:20.243 Removing: /var/run/dpdk/spdk_pid60114
00:34:20.243 Removing: /var/run/dpdk/spdk_pid60570
00:34:20.243 Removing: /var/run/dpdk/spdk_pid60669
00:34:20.243 Removing: /var/run/dpdk/spdk_pid60784
00:34:20.243 Removing: /var/run/dpdk/spdk_pid60837
00:34:20.243 Removing: /var/run/dpdk/spdk_pid60868
00:34:20.243 Removing: /var/run/dpdk/spdk_pid60946
00:34:20.243 Removing: /var/run/dpdk/spdk_pid61566
00:34:20.243 Removing: /var/run/dpdk/spdk_pid61608
00:34:20.243 Removing: /var/run/dpdk/spdk_pid62088
00:34:20.243 Removing: /var/run/dpdk/spdk_pid62186
00:34:20.243 Removing: /var/run/dpdk/spdk_pid62306
00:34:20.243 Removing: /var/run/dpdk/spdk_pid62359
00:34:20.243 Removing: /var/run/dpdk/spdk_pid62390
00:34:20.243 Removing: /var/run/dpdk/spdk_pid62410
00:34:20.243 Removing: /var/run/dpdk/spdk_pid64272
00:34:20.243 Removing: /var/run/dpdk/spdk_pid64409
00:34:20.243 Removing: /var/run/dpdk/spdk_pid64413
00:34:20.243 Removing: /var/run/dpdk/spdk_pid64431
00:34:20.243 Removing: /var/run/dpdk/spdk_pid64474
00:34:20.243 Removing: /var/run/dpdk/spdk_pid64478
00:34:20.243 Removing: /var/run/dpdk/spdk_pid64490
00:34:20.243 Removing: /var/run/dpdk/spdk_pid64535
00:34:20.243 Removing: /var/run/dpdk/spdk_pid64539
00:34:20.243 Removing: /var/run/dpdk/spdk_pid64551
00:34:20.243 Removing: /var/run/dpdk/spdk_pid64596
00:34:20.243 Removing: /var/run/dpdk/spdk_pid64600
00:34:20.243 Removing: /var/run/dpdk/spdk_pid64612
00:34:20.243 Removing: /var/run/dpdk/spdk_pid66004
00:34:20.243 Removing: /var/run/dpdk/spdk_pid66106
00:34:20.243 Removing: /var/run/dpdk/spdk_pid67508
00:34:20.243 Removing: /var/run/dpdk/spdk_pid69248
00:34:20.243 Removing: /var/run/dpdk/spdk_pid69322
00:34:20.243 Removing: /var/run/dpdk/spdk_pid69397
00:34:20.243 Removing: /var/run/dpdk/spdk_pid69508
00:34:20.243 Removing: /var/run/dpdk/spdk_pid69604
00:34:20.243 Removing: /var/run/dpdk/spdk_pid69701
00:34:20.243 Removing: /var/run/dpdk/spdk_pid69775
00:34:20.243 Removing: /var/run/dpdk/spdk_pid69856
00:34:20.243 Removing: /var/run/dpdk/spdk_pid69960
00:34:20.243 Removing: /var/run/dpdk/spdk_pid70052
00:34:20.243 Removing: /var/run/dpdk/spdk_pid70148
00:34:20.243 Removing: /var/run/dpdk/spdk_pid70228
00:34:20.243 Removing: /var/run/dpdk/spdk_pid70303
00:34:20.243 Removing: /var/run/dpdk/spdk_pid70413
00:34:20.502 Removing: /var/run/dpdk/spdk_pid70505
00:34:20.502 Removing: /var/run/dpdk/spdk_pid70595
00:34:20.502 Removing: /var/run/dpdk/spdk_pid70669
00:34:20.502 Removing: /var/run/dpdk/spdk_pid70744
00:34:20.502 Removing: /var/run/dpdk/spdk_pid70854
00:34:20.502 Removing: /var/run/dpdk/spdk_pid70946
00:34:20.502 Removing: /var/run/dpdk/spdk_pid71047
00:34:20.502 Removing: /var/run/dpdk/spdk_pid71121
00:34:20.502 Removing: /var/run/dpdk/spdk_pid71190
00:34:20.502 Removing: /var/run/dpdk/spdk_pid71264
00:34:20.502 Removing: /var/run/dpdk/spdk_pid71344
00:34:20.502 Removing: /var/run/dpdk/spdk_pid71447
00:34:20.502 Removing: /var/run/dpdk/spdk_pid71543
00:34:20.502 Removing: /var/run/dpdk/spdk_pid71638
00:34:20.502 Removing: /var/run/dpdk/spdk_pid71715
00:34:20.502 Removing: /var/run/dpdk/spdk_pid71797
00:34:20.502 Removing: /var/run/dpdk/spdk_pid71871
00:34:20.502 Removing: /var/run/dpdk/spdk_pid71945
00:34:20.502 Removing: /var/run/dpdk/spdk_pid72050
00:34:20.502 Removing: /var/run/dpdk/spdk_pid72147
00:34:20.502 Removing: /var/run/dpdk/spdk_pid72291
00:34:20.502 Removing: /var/run/dpdk/spdk_pid72575
00:34:20.502 Removing: /var/run/dpdk/spdk_pid72618
00:34:20.502 Removing: /var/run/dpdk/spdk_pid73073
00:34:20.502 Removing: /var/run/dpdk/spdk_pid73261
00:34:20.502 Removing: /var/run/dpdk/spdk_pid73363
00:34:20.502 Removing: /var/run/dpdk/spdk_pid73484
00:34:20.502 Removing: /var/run/dpdk/spdk_pid73532
00:34:20.502 Removing: /var/run/dpdk/spdk_pid73557
00:34:20.502 Removing: /var/run/dpdk/spdk_pid73858
00:34:20.502 Removing: /var/run/dpdk/spdk_pid73918
00:34:20.502 Removing: /var/run/dpdk/spdk_pid73993
00:34:20.502 Removing: /var/run/dpdk/spdk_pid74394
00:34:20.502 Removing: /var/run/dpdk/spdk_pid74540
00:34:20.502 Removing: /var/run/dpdk/spdk_pid75335
00:34:20.502 Removing: /var/run/dpdk/spdk_pid75467
00:34:20.502 Removing: /var/run/dpdk/spdk_pid75631
00:34:20.502 Removing: /var/run/dpdk/spdk_pid75745
00:34:20.502 Removing: /var/run/dpdk/spdk_pid76079
00:34:20.502 Removing: /var/run/dpdk/spdk_pid76371
00:34:20.502 Removing: /var/run/dpdk/spdk_pid76724
00:34:20.502 Removing: /var/run/dpdk/spdk_pid76906
00:34:20.502 Removing: /var/run/dpdk/spdk_pid77097
00:34:20.502 Removing: /var/run/dpdk/spdk_pid77145
00:34:20.502 Removing: /var/run/dpdk/spdk_pid77355
00:34:20.502 Removing: /var/run/dpdk/spdk_pid77380
00:34:20.502 Removing: /var/run/dpdk/spdk_pid77428
00:34:20.502 Removing: /var/run/dpdk/spdk_pid77699
00:34:20.502 Removing: /var/run/dpdk/spdk_pid77932
00:34:20.502 Removing: /var/run/dpdk/spdk_pid78560
00:34:20.502 Removing: /var/run/dpdk/spdk_pid79338
00:34:20.502 Removing: /var/run/dpdk/spdk_pid80060
00:34:20.502 Removing: /var/run/dpdk/spdk_pid80939
00:34:20.502 Removing: /var/run/dpdk/spdk_pid81093
00:34:20.502 Removing: /var/run/dpdk/spdk_pid81176
00:34:20.502 Removing: /var/run/dpdk/spdk_pid81752
00:34:20.502 Removing: /var/run/dpdk/spdk_pid81828
00:34:20.502 Removing: /var/run/dpdk/spdk_pid82685
00:34:20.502 Removing: /var/run/dpdk/spdk_pid83308
00:34:20.502 Removing: /var/run/dpdk/spdk_pid84266
00:34:20.502 Removing: /var/run/dpdk/spdk_pid84400
00:34:20.502 Removing: /var/run/dpdk/spdk_pid84442
00:34:20.502 Removing: /var/run/dpdk/spdk_pid84500
00:34:20.502 Removing: /var/run/dpdk/spdk_pid84563
00:34:20.502 Removing: /var/run/dpdk/spdk_pid84621
00:34:20.502 Removing: /var/run/dpdk/spdk_pid84815
00:34:20.502 Removing: /var/run/dpdk/spdk_pid84910
00:34:20.502 Removing: /var/run/dpdk/spdk_pid84977
00:34:20.502 Removing: /var/run/dpdk/spdk_pid85051
00:34:20.502 Removing: /var/run/dpdk/spdk_pid85091
00:34:20.502 Removing: /var/run/dpdk/spdk_pid85147
00:34:20.503 Removing: /var/run/dpdk/spdk_pid85266
00:34:20.503 Clean
00:34:20.503 12:37:51 -- common/autotest_common.sh@1453 -- # return 0
00:34:20.503 12:37:51 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:34:20.503 12:37:51 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:20.503 12:37:51 -- common/autotest_common.sh@10 -- # set +x
00:34:20.762 12:37:51 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:34:20.762 12:37:51 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:20.762 12:37:51 -- common/autotest_common.sh@10 -- # set +x
00:34:20.762 12:37:51 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:20.762 12:37:51 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:34:20.762 12:37:51 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:34:20.762 12:37:51 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:20.762 12:37:51 -- spdk/autotest.sh@398 -- # hostname
00:34:20.762 12:37:51 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:34:20.762 geninfo: WARNING: invalid characters removed from testname!
00:34:47.320 12:38:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:49.842 12:38:20 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:53.120 12:38:23 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:55.639 12:38:25 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:58.163 12:38:28 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:00.086 12:38:30 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:02.618 12:38:33 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:02.619 12:38:33 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:02.619 12:38:33 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:35:02.619 12:38:33 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:02.619 12:38:33 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:02.619 12:38:33 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:35:02.619 + [[ -n 5021 ]]
00:35:02.619 + sudo kill 5021
00:35:02.626 [Pipeline] }
00:35:02.641 [Pipeline] // timeout
00:35:02.646 [Pipeline] }
00:35:02.661 [Pipeline] // stage
00:35:02.666 [Pipeline] }
00:35:02.681 [Pipeline] // catchError
00:35:02.690 [Pipeline] stage
00:35:02.692 [Pipeline] { (Stop VM)
00:35:02.704 [Pipeline] sh
00:35:02.979 + vagrant halt
00:35:05.500 ==> default: Halting domain...
00:35:12.071 [Pipeline] sh
00:35:12.347 + vagrant destroy -f
00:35:14.933 ==> default: Removing domain...
00:35:15.884 [Pipeline] sh
00:35:16.168 + mv output /var/jenkins/workspace/nvme-vg-autotest_3/output
00:35:16.177 [Pipeline] }
00:35:16.194 [Pipeline] // stage
00:35:16.201 [Pipeline] }
00:35:16.216 [Pipeline] // dir
00:35:16.223 [Pipeline] }
00:35:16.239 [Pipeline] // wrap
00:35:16.246 [Pipeline] }
00:35:16.262 [Pipeline] // catchError
00:35:16.273 [Pipeline] stage
00:35:16.275 [Pipeline] { (Epilogue)
00:35:16.290 [Pipeline] sh
00:35:16.569 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:24.691 [Pipeline] catchError
00:35:24.693 [Pipeline] {
00:35:24.702 [Pipeline] sh
00:35:24.975 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:24.975 Artifacts sizes are good
00:35:24.981 [Pipeline] }
00:35:24.995 [Pipeline] // catchError
00:35:25.007 [Pipeline] archiveArtifacts
00:35:25.013 Archiving artifacts
00:35:25.136 [Pipeline] cleanWs
00:35:25.147 [WS-CLEANUP] Deleting project workspace...
00:35:25.147 [WS-CLEANUP] Deferred wipeout is used...
00:35:25.153 [WS-CLEANUP] done
00:35:25.155 [Pipeline] }
00:35:25.171 [Pipeline] // stage
00:35:25.176 [Pipeline] }
00:35:25.191 [Pipeline] // node
00:35:25.196 [Pipeline] End of Pipeline
00:35:25.237 Finished: SUCCESS